Topics most predictive of favorable overall assessment in outpatient radiology

Background Patients’ subjective experiences during clinical interactions may affect their engagement in healthcare, and better understanding of the issues patients consider most important may help improve service quality and patient-staff relationships. While diagnostic imaging is a growing component of healthcare utilization, few studies have quantitatively and systematically assessed what patients deem most relevant in radiology settings. To elucidate factors driving patient satisfaction in outpatient radiology, we derived quantitative models to identify items most predictive of patients’ overall assessment of radiology encounters. Methods Press-Ganey survey data (N = 69,319) collected over a 9-year period at a single institution were retrospectively analyzed, with each item response dichotomized as “favorable” or “unfavorable.” Multiple logistic regression analyses were performed on 18 binarized Likert items to compute odds ratios (OR) for those question items significantly predicting Overall Rating of Care or Likelihood of Recommending. In a secondary analysis to identify topics more relevant to radiology than other encounter types, items significantly more predictive of concordant ratings in radiology compared to non-radiology visits were also identified. Results Among radiology survey respondents, top predictors of Overall Rating and Likelihood of Recommending were items addressing patient concerns or complaints (OR 6.8 and 4.9, respectively) and sensitivity to patient needs (OR 4.7 and 4.5, respectively). When comparing radiology and non-radiology visits, the top items more predictive for radiology included unfavorable responses to helpfulness of registration desk personnel (OR 1.4–1.6), comfort of waiting areas (OR 1.4), and ease of obtaining an appointment at the desired time (OR 1.4). 
Conclusions Items related to patient-centered empathic communication were the most predictive of favorable overall ratings among radiology outpatients, while underperformance in logistical issues related to registration, scheduling, and waiting areas may have greater adverse impact on radiology than non-radiology encounters. Findings may offer potential targets for future quality improvement efforts.

Introduction Patient-centered care, in which healthcare providers identify and meet patients' needs and preferences, is an increasingly important facet of healthcare. With recent implementation of quality payment programs such as the Merit-Based Incentive Payment System (MIPS), radiology practices and other healthcare entities are increasingly incentivized to demonstrate increased value or better outcomes [1][2][3], including patient satisfaction and metrics related to patient-centered care [1]. Additionally, patient satisfaction may contribute to the attraction and/or retention of patients in radiology [4] and other healthcare settings [5][6][7][8][9]. Given the variety of options available to improve the patient experience in radiology, particularly in challenging situations in MRI or interventional procedures [10][11][12][13][14], an understanding of which components of a patient experience are most crucial to patients' overall satisfaction or willingness to recommend is an important goal in clinical radiology practice.
Existing studies have attempted to elucidate the dominant contributors to radiology patient satisfaction through specific quality improvement initiatives, analyses of complaints and targeted feedback, or post-encounter surveys, revealing several recurring themes important to patients. An early qualitative study of written complaints in radiology determined that failure to provide patient-centered care and poor interpersonal interactions with staff were common themes among submitted complaints [15]. Standardized surveys, such as those developed by Press-Ganey (PG), have also been analyzed to assess various components of patient satisfaction following radiology quality improvement initiatives [16]. Surveys targeting patient communication found that patients are most concerned with receiving adequate information to prepare for their imaging studies [17]. Another study utilizing electronic kiosks to survey patients following their visit to a tertiary-care imaging center revealed that wait times, cleanliness, receptionist courtesy, and patient-staff communication were most important [18]. A published observation that short medical history interviews improved patient satisfaction in radiology practice settings implied that patient-centered communication is a major contributor to radiology patient satisfaction [19]. A 2019 European Society of Radiology survey found that patients valued correctness of diagnosis and communication from medical staff the most [20]. Many of the above studies were limited by relatively small sample sizes or a relatively short duration of sampling.
While many existing studies simply report satisfaction scores for various domains or subtopics akin to a quality assessment report, our approach differs in that we seek to quantify the relative importance of each question item on the target outcome variables of interest using a multivariable statistical prediction model. Insights gleaned from a systematic, data-driven approach to identifying top predictors of patient satisfaction outcomes may facilitate prioritization of practice improvements deemed of highest potential yield for a given practice and patient population. In this study, we derive two quantitative models for predicting the two dependent variables of overall subjective ratings and willingness to recommend to others as a function of individual Press-Ganey survey question items. We further quantify the impact of specific topics within two population subgroups that had been observed in a prior publication to have a lower satisfaction with their radiology care (young working-age patients and certain racial minority groups) [21]. Moreover, we test in a secondary analysis the hypothesis that some items are more predictive of the outcomes of interest in radiology vs. non-radiology outpatient visits.

Survey data
In this IRB-approved and HIPAA-compliant study, PG surveys completed over a period of 9 years (2008-2017) at a single institution were retrospectively analyzed. IRB approval was obtained from the Biomedical Sciences Institutional Review Board of the Ohio State University (Columbus, OH), and informed consent was waived due to anonymized analysis of previously acquired data. The PG surveys consisted of multiple Likert items, grouped into sections, that asked participants to rate various aspects of their experience following their healthcare visit, such as ease of scheduling an appointment, responsiveness of staff to patient complaints and sensitivity to their needs, skill of the staff, and wait times. Excluding the Overall Assessment section, there were 18 question items distributed among the following four sections: Registration, Facilities, Test or Treatment, and Personal Issues [4]. Items were answered on a 5-point rating scale, with 1 representing a "Very Poor" response and 5 representing a "Very Good" response. To dichotomize survey results, responses of 4-5 were classified as favorable, while responses of 1-3 were classified as unfavorable. Within the Overall Assessment section, 2 items were selected to serve as the outcome variables: the patient's subjective overall rating of care (Overall) and the likelihood of recommending a center to others (Recommend). All surveys in the study period were included in the statistical analysis, including those that were partially completed.
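The dichotomization rule above can be sketched in a few lines. This is an illustrative sketch, not the authors' code; the function name is hypothetical, and missing items (from partially completed surveys) are left missing here so they can later be handled by imputation.

```python
# Minimal sketch of the dichotomization rule described above:
# Likert ratings of 4-5 are "favorable" (1), ratings of 1-3 are
# "unfavorable" (0); a missing item stays missing (None) so it can
# be handled downstream by multiple imputation.

def dichotomize(response):
    """Map a 1-5 Likert rating to favorable (1) / unfavorable (0)."""
    if response is None:          # item left blank on a partial survey
        return None
    return 1 if response >= 4 else 0

ratings = [5, 4, 3, 2, 1, None]
print([dichotomize(r) for r in ratings])  # [1, 1, 0, 0, 0, None]
```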

Statistics
Missing data in the survey were addressed with multiple imputation (MI) under the missing at random (MAR) assumption, which stipulates that the probabilities of data being missing are independent of the missing values themselves and that systematic differences between missing and observed values can be entirely explained by other observed variables [22], including all outcome and predictor variables. The logistic regression analyses were performed separately on five MI instances of the dataset, and final statistical inferences were drawn by combining the results across MI repetitions. Because the outcome variables are binary, a binary logistic regression model was fitted, with the binarized Recommend and Overall variables as dependent variables and the 18 binarized questionnaire items as predictor variables. Variables were selected with a stepwise approach in SAS Base 9.4 (SAS Institute Inc., Cary, NC); candidate variables entered the model at a significance level of 0.20 evaluated in each completed dataset. In the second step, all selected variables were entered into a logistic regression model using the GENMOD procedure in SAS to estimate odds ratios (OR), 95% confidence intervals, and p-values for each predictor variable. Statistical significance was defined as p < 0.05, with Bonferroni adjustment for multiple comparisons.
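To make the odds-ratio quantity concrete: for a single binarized predictor, the logistic-regression odds ratio reduces to the cross-product ratio of the corresponding 2x2 table, and a Wald-type 95% confidence interval follows from the standard error of log(OR). The sketch below uses invented counts, not study data, and a hypothetical helper name; the study itself fitted multivariable models in SAS, so this only illustrates the single-predictor case.

```python
import math

# Hedged illustration (invented counts): odds ratio and Wald 95% CI
# from a 2x2 table of (item response) x (outcome response).
# a, b: favorable/unfavorable outcome counts when the item is favorable
# c, d: favorable/unfavorable outcome counts when the item is unfavorable

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)                      # cross-product odds ratio
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# e.g., 900/1000 patients favorable on an item rated care favorably,
# vs. 600/1000 patients unfavorable on that item:
or_, lo, hi = odds_ratio_ci(900, 100, 600, 400)
print(round(or_, 2))  # 6.0
```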
We applied a similar regression model selection approach to identify variables most predictive of favorable outcomes for demographic subsets previously observed to demonstrate significantly lower satisfaction rates following radiology appointments [21]. The two population groups examined were those aged 20-39 (Age) and those of Asian, black, or other/unknown race (Race). As before, 18 independent predictor variables and one outcome variable were included for initial variable selection, followed by multiple logistic regression analysis to estimate odds ratios, 95% confidence intervals, and p-values for each subgroup.
Using the same two-step approach and SAS procedures, we tested whether, at each level of the binary predictor variables, the odds ratio of a favorable Recommend or Overall response differed between the radiology and non-radiology populations (i.e., whether the ratio of odds ratios differed from 1), in a logistic regression analysis using the GENMOD procedure with a SLICE statement for data partitioning. Specifically, the 18 predictor variables and one outcome variable were included for initial selection, followed by multiple logistic regression analysis with the group variable (i.e., radiology or non-radiology), the selected individual variables, and their interaction terms as predictors. The odds ratio of the Recommend and Overall variables for radiology vs. non-radiology populations at each level of the binary predictor variables was calculated for each imputed dataset, followed by the MIANALYZE procedure to compute pooled odds ratios. From this, the concordance between outcome and predictor variables (i.e., both favorable or both unfavorable) could be compared between radiology and non-radiology patients. The 95% confidence intervals and p-values were computed with Bonferroni adjustment for multiple comparisons.
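The group-comparison logic can be sketched with invented counts. For one binary item and one binary group indicator, the exponentiated group-by-item interaction coefficient in a logistic model equals the ratio of the two stratum-specific odds ratios, so the ratio can be illustrated directly from the stratified tables. This is a simplified single-item sketch (hypothetical helper name, fabricated counts), not the multivariable SAS analysis used in the study.

```python
# Sketch of the secondary analysis: does an item's odds ratio differ
# between radiology and non-radiology visits? With one binary item and
# one binary group, exp(interaction coefficient) equals the ratio of
# the two stratum-specific cross-product odds ratios. Counts invented.

def table_or(a, b, c, d):
    """Cross-product odds ratio from a 2x2 table."""
    return (a * d) / (b * c)

or_radiology    = table_or(850, 150, 500, 500)  # OR within radiology stratum
or_nonradiology = table_or(880, 120, 620, 380)  # OR within non-radiology stratum

# = exp(group-by-item interaction coefficient) in the saturated model
ratio_of_ors = or_radiology / or_nonradiology
print(round(ratio_of_ors, 2))  # 1.26
```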

Results
A total of 69,319 surveys were analyzed, of which 36,693 were completed by patients following a radiology appointment and 32,626 following a non-radiology appointment. Of the non-radiology surveys, 763 (2.4%) contained unfavorable responses to overall rating of care and 1,049 (3.3%) contained unfavorable responses to likelihood of recommending a center to others (Table 1). Of the radiology surveys, 887 (2.4%) had unfavorable responses to overall rating and 1,136 (3.1%) had unfavorable responses to likelihood of recommending (Table 1). Table 2 lists the counts and percentages of missing responses for radiology and non-radiology visits; the question items are shown as worded in the survey and in the order in which they appear. The items with the highest missing percentages relate to keeping family informed, responsiveness to concerns/complaints, test/treatment preparation instructions, waiting time in the test/treatment area, and ease of obtaining an appointment. These high missing frequencies can plausibly be explained by encounters for which the items are "not applicable": no family present, no specific concerns/complaints, no specific preparation needed, no separate waiting in the test/treatment area, or walk-in care, respectively. Although missing responses occur more frequently for some questions than others, and with different frequencies between radiology and non-radiology visits, our missing at random assumption likely remains valid because systematic differences between missing and observed values are unlikely: unanswered items are most likely due to the patient being indifferent in their assessment or deeming the issue irrelevant or unimportant.

PLOS ONE
Topics predicting radiology patient satisfaction

The logistic regression analysis applied to radiology appointments identified 9 survey items that independently predicted the overall rating of care and 12 survey items that independently predicted likelihood of recommending (Table 3). The strongest independent predictor for both outcome variables was responsiveness to concerns and complaints made during a visit (OR = 6.79, p < 0.0001 for Overall; OR = 4.94, p < 0.0001 for Recommend). The next strongest predictor for both outcomes was sensitivity to the patient's needs (OR = 4.66, p < 0.0001 for Overall; OR = 4.46, p < 0.0001 for Recommend). Skill level of the staff who provided care to the patient was also a predictor for both outcomes (OR = 4.33, p < 0.0001 for Overall; OR = 2.73, p = 0.0005 for Recommend). Other question items that were independent predictors for both outcomes addressed wait times in the testing/treatment area, staff communication with patient and family about the test or treatment, helpfulness of registration personnel, friendliness or courtesy of the staff, comfort of the waiting area, and staff explanations regarding the test or treatment (OR range: 1.72-3.58) (Table 3).
Likelihood to recommend, but not overall rating, was independently predicted by favorable responses to items regarding cleanliness of the facility (OR = 2.07, p = 0.0032), ease of getting an appointment at the desired time (OR = 2.07, p <0.0001), ease of the registration process (OR = 2.15, p = 0.0014), and ease of navigating the facility (OR = 1.96, p <0.0001). Staff concern for patient comfort was a predictor of overall rating but not likelihood of recommending (OR = 2.20, p < 0.0001).
When applied to subgroups of the population previously observed to have lower radiology visit satisfaction, the logistic regression analysis identified 5 survey items in the targeted age subgroup and 5 items in the targeted racial minority subgroup that independently predicted the overall rating of care (Table 4). Response to concerns and complaints was a predictor in both subgroups (Table 4). Staff concern for comfort (OR = 3.71, p = 0.012) was predictive of overall rating in the targeted age subgroup but not in the targeted race subgroup. Skill of nonphysician healthcare staff (OR = 4.23, p = 0.0080) was predictive in the targeted race subgroup but not in the targeted age subgroup. The logistic regression analysis for the population subgroups identified 4 different survey items in the targeted age subgroup and 3 different items in the targeted race subgroup that independently predicted the likelihood of recommending (Table 4). Sensitivity to patient needs was the strongest predictor for the targeted age subgroup (OR = 9.23, p < 0.0001). Response to concerns and complaints was the strongest predictor for the targeted race subgroup (OR = 10.77, p < 0.0001) and was also a strong predictor for the targeted age subgroup (OR = 5.23, p = 0.0038). Other predictive variables for both groups included explanations given by staff (OR = 6.77, p < 0.0001 for Age; OR = 6.67, p < 0.0001 for Race). The only predictive item in the targeted age subgroup that was not predictive in the targeted race subgroup was waiting time in the testing and treatment area (OR = 4.1, p < 0.0001). All predictive items in the targeted race subgroup were also predictive in the targeted age subgroup.
An additional logistic regression analysis was performed to assess whether some survey items are more likely to predict a concordant outcome response for radiology visits compared to non-radiology visits (Table 5). An unfavorable response to helpfulness of registration desk personnel was more predictive of a concordant negative response to both outcome variables among radiology patients compared to non-radiology patients (OR = 1.55, p = 0.0002 for Overall; OR = 1.42, p = 0.0087 for Recommend). An unfavorable assessment of waiting area comfort was more predictive of a concordant unfavorable response to overall rating among radiology patients compared to non-radiology patients (OR = 1.36, p = 0.0054). Finally, an unfavorable assessment of ease of scheduling an appointment was more predictive of a concordant unfavorable response to recommend rating among radiology patients compared to non-radiology patients (OR = 1.38, p = 0.0037).

Discussion
In this large analysis of tens of thousands of completed satisfaction surveys over a near-decade of practice, we were able to quantify the extent to which specific question items predict a favorable overall rating of care or likelihood of recommending to others. For both outcome variables, the two most predictive question items address the practice's responsiveness to patient concerns and complaints and its sensitivity to patient needs, illustrating the crucial role of empathic communication in patient experiences. Responsiveness to patient concerns is even more predictive in population subgroups, such as racial minorities, previously noted to have lower satisfaction metrics in outpatient radiology. Responses to some survey items had a greater apparent impact on predicting a concordant outcome among radiology patients than among non-radiology patients. For instance, negative responses regarding the helpfulness of the registration desk, ease of getting an appointment, and the comfort of waiting areas were more predictive of unfavorable outcome responses for radiology visits than for non-radiology visits, suggesting that patients weigh these topics more heavily for radiology encounters. Historically, topics and issues most important to patients have been identified using qualitative approaches such as focus group sessions and targeted requests for feedback, or using more structured methods such as surveys and questionnaires. Our approach of applying statistical modeling to a large database of existing survey responses to objectively estimate odds ratios for individual question items represents an efficient method for assessing the major issues contributing to patient satisfaction or dissatisfaction.
Prior investigations utilizing either electronic or paper surveys have reported that patient satisfaction in radiology is strongly influenced by factors related to patient-staff communication [17][18][19][20]. One analysis of patient complaints found that a majority were related to failure of patient-centered care [15]. Studies using Press-Ganey satisfaction surveys in orthopedic surgery and orthopedic oncology have also found that interpersonal and communication issues and sensitivity to patient needs were influential factors predicting patient satisfaction and likelihood of recommending a practice [23,24]. Our quantitative model's finding that the most influential predictors of favorable patient experience were responsiveness to patients' concerns or complaints and sensitivity to patients' needs is concordant with previously published findings that highlight the importance of a patient-centered culture and a need for appropriate communication skills to optimize patient experiences.
Our results highlight the potential of existing satisfaction survey instruments to provide more than simply a quality outcome metric. A data-driven analysis quantifying the impact of individual contributors to patients' overall assessment or likelihood of recommending a practice can pinpoint the issues that most affect their experience. Consequently, items with high odds ratios predicting favorable experiences can be interpreted as high-yield targets for focused quality improvement initiatives. Given that the most predictive factors in our study relate to patient-centered empathic communication, the reported benefits of staff communication training on patient satisfaction [1] are particularly relevant. Quality improvement initiatives may also be selectively applied to specific subgroups of the patient population, allowing more focused outreach and targeted interventions for subgroups with historically suboptimal satisfaction scores. Furthermore, although determinants of patient satisfaction in radiology likely overlap with those for non-radiology encounters, the comparison of radiology and non-radiology survey responses with respect to prediction of concordant outcomes can help identify topics that deserve more attention in radiology, such as logistical items related to registration, scheduling, and waiting areas.

This study has several limitations. As a single-institution study, its generalizability to other institutions, practice settings, or patient populations may be limited. Nonetheless, it is plausible that patient expectations and considerations in the healthcare setting, and consequently the quantitative relationships examined in our study, are quite similar across institutions.
Another potential limitation is that the methodological approach to identify independent predictors within the statistical model may not detect all relevant survey question items if some correlate with other independent predictors. As with any survey-based approach, there is potential for non-response bias when utilizing PG survey instruments [25], such that the population responding to surveys in our study may not fully reflect the entire radiology patient population. While variables such as illness severity could affect a decision to respond to the surveys, the surveys were sent only to outpatients; the most serious or life-threatening encounters, which typically result in hospital admissions, intensive care unit stays, or emergency department visits, are therefore not included in the surveyed encounters. We did not record other variables that may explain patients' non-participation in the survey, and non-response bias cannot be completely eliminated with survey-based methods, but we anticipate that its impact on our study's interpretation is minimal.

Conclusions
In summary, our study applied logistic regression to identify PG survey items most predictive of favorable patient responses for overall rating of care and likelihood of recommending. Items related to patient-centered empathic communication were the most predictive of favorable ratings among radiology patients, offering potential high-yield targets for future quality improvement programs. The relative impact of some of these topics is even greater when focusing on specific population subgroups previously demonstrated to have lower satisfaction ratings in outpatient radiology. When contrasting radiology and non-radiology encounters, underperformance in logistical issues related to registration, scheduling, and waiting areas may have greater adverse impact on patient satisfaction for radiology visits than for non-radiology encounters. In a healthcare landscape that is becoming more focused on patient-centered care, identification of drivers of patient satisfaction could become a valuable tool in maximizing both practice reimbursement and patient retention. Further investigations into the drivers of patient satisfaction and the impact of focused quality improvement initiatives in specific radiology settings would be helpful for ongoing efforts to improve the radiology patient experience.