
Secondary analysis of hospital patient experience scores across England’s National Health Service – How much has improved since 2005?

  • Kate Honeyford,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Writing – original draft, Writing – review & editing

    k.honeyford@imperial.ac.uk

    Affiliation Dr Foster Unit at Imperial College, London, England

  • Felix Greaves,

    Roles Conceptualization, Writing – review & editing

    Affiliation Department of Primary Care and Public Health, Imperial College, London, England

  • Paul Aylin,

    Roles Conceptualization, Writing – review & editing

    Affiliation Dr Foster Unit at Imperial College, London, England

  • Alex Bottle

    Roles Conceptualization, Funding acquisition, Methodology, Project administration, Supervision, Writing – review & editing

    Affiliation Dr Foster Unit at Imperial College, London, England

Abstract

Objective

To examine trends in patient experience and consistency between hospital trusts and settings.

Methods

Observational study of publicly available patient experience surveys of three hospital settings (inpatients (IP), accident and emergency (A&E) and outpatients (OP)) of 130 acute NHS hospital trusts in England between 2004/05 and 2014/15.

Results

Overall patient experience has been good, showing modest improvements over time across the three hospital settings. Individual questions with the biggest improvement across all three settings are cleanliness (IP: +7.1, A&E: +6.5, OP: +4.7) and information about danger signals (IP: +3.8, A&E: +3.9, OP: +4.0). Trust performance has been consistent over time: 71.5% of trusts ranked in the same cluster for more than five years. There is some consistency across settings, especially between outpatients and inpatients. The lowest-scoring questions, regarding information at discharge, are the same in all years and all settings.

Conclusions

The greatest improvement across all three settings has been for cleanliness, which has been the subject of national policies and targets. Information about danger signals and medication side-effects showed the least consistency across settings, and scores have remained low over time despite a large increase in the danger-signals score. Patient experience of aspects of access and waiting has declined, as has experience of discharge delay, likely reflecting known increases in pressure on England’s NHS.

Introduction

Patient experience is increasingly seen as an important aspect of healthcare, both as an ‘intrinsically important’ dimension of care quality [1] and as a stimulus for improvement [2], and the last 20 years have seen a proliferation of national patient experience surveys in many countries [3]. Patient experience scores have shown associations with several outcomes, including adherence to medication [4], good clinical process measures [5] and fewer inpatient care complications [6], although some dispute the link’s causality [7].

The National Health Service (NHS) National Patient Survey Programme, administered by Picker Institute Europe, covers patients’ experiences of a range of health provision. The Care Quality Commission (CQC), the national regulator, reports results for each trust [8]. In a Diagnostic Tool [9] published by the Department of Health (DoH), the questions in the three main hospital surveys are partitioned into five key domains (Box 1). The DoH suggests the tool shows NHS managers and the general public how scores vary across NHS healthcare providers.

Box 1 Five domains of patient experience

  • Access and waiting (AW) e.g. How long did you wait before you first spoke to a nurse or doctor?
  • Safe; high quality; co-ordinated care (SHQCC) e.g. Did a member of staff tell you about any danger signals regarding your illness or treatment to watch for after you went home?
  • Better Information, More Choice (BIMC) e.g. Were you involved as much as you wanted to be in decisions about your care and treatment?
  • Building closer relationships (BR) e.g. Did doctors or nurses talk in front of you as if you weren't there?
  • Clean; friendly; comfortable place to be (CCFP) e.g. Overall, did you feel you were treated with respect and dignity while you were in the hospital?

The domains have similar questions and scoring methodologies across the different settings (inpatient (IP), accident and emergency (A&E) and outpatient (OP)) and are combined into an Overall Patient Experience Score (OPES). Each domain score is the mean of the scores for the questions within the domain; the OPES is the mean of the five domain scores. The questions included in the domains are unchanged since the surveys’ inception, making them ideal for examining trends over time and between settings.
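
To make the scoring arithmetic concrete, here is a minimal sketch in Python (the published analysis used SAS; the question names and values below are hypothetical, for illustration only):

```python
# Sketch of the OPES arithmetic: each domain score is the mean of its
# question scores, and the OPES is the mean of the five domain scores.
# Question names and values are hypothetical, for illustration only.
domains = {
    "AW":    {"wait_to_speak": 78.2, "total_wait": 71.5},
    "SHQCC": {"danger_signals": 48.0, "med_side_effects": 44.1},
    "BIMC":  {"involved_in_decisions": 73.9},
    "BR":    {"not_talked_over": 81.4},
    "CCFP":  {"cleanliness": 88.7, "respect_dignity": 89.3},
}

domain_scores = {name: sum(qs.values()) / len(qs) for name, qs in domains.items()}
opes = sum(domain_scores.values()) / len(domain_scores)  # mean of the 5 domains

print({k: round(v, 1) for k, v in domain_scores.items()})
print(f"OPES: {opes:.1f}")
```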

There are now ten consecutive years of inpatient experience data, as well as additional surveys of A&E and outpatient departments, with approximately one million patients responding in total. Given the importance of patient experience within the NHS, the annual publication of results and the repetition of the questions asked, we would expect scores to improve over time and performance between settings to become more consistent.

A recent report on inpatients [10] highlights the overall positive experience, with ongoing improvements, especially in areas of policy intervention. Negative trends were seen in areas where ‘there are well-recognised pressures in the system’. Trust-level inpatient analysis suggested that the majority of trusts have not improved consistently (a trust can comprise several hospitals) [11]. A cluster analysis of the 2008 and 2009 inpatient, outpatient and A&E surveys found that 21% of trusts had above-average performance across all surveys and all domains of care, but only 4% of trusts had consistently above- or below-average performance across all three settings (A&E, OP and IP), suggesting that trusts do not perform consistently across settings [12].

Our study extends this recent work by analysing trends over time in three key hospital settings (inpatient, outpatient and A&E), comparing a trust’s performance with other trusts and comparing hospital settings. We aim to determine whether:

  1. patient experience in each setting has changed over time,
  2. trusts have performed consistently over time, and
  3. there is consistency between hospital settings.

Methods

This study utilises publicly available results of NHS patient surveys completed between 2004/05 and 2014/15. Details of sample sizes and response rates are available from the NHS Surveys website; summary tables (S1 Table and S2 Table) are included in the supporting information. Inpatient surveys are annual. There have been five A&E and four outpatient surveys since 2003. In 2003 and 2004/05 these were run in the same year, but since then they have been in different years. The most recent surveys included in the analysis were 2014/15 for A&E and 2011/12 for outpatients.

Prior to publication, scores are standardised based on age, gender and, for inpatients, route of admission [13].

We included 130 acute hospital trusts with inpatient survey results for the ten-year period 2005/06 to 2014/15. The majority of trusts which had data for some years but not others had merged or were newly formed during the period of study. Specialist trusts, which generally have a single speciality, were excluded; they were the highest-scoring trusts in the IP surveys in all years.

Initially, descriptive analysis of data from NHS England’s Patient Experience Tool [9] determined the patterns in scores over time for overall patient experience, domains and individual questions, for inpatients, outpatients and A&E. Scores in 2005/06 and 2014/15 were also compared. To determine whether the performance of the highest- and lowest-scoring trusts was consistent over time, the mean score for each trust and domain in the first three years was calculated and the 25% highest-scoring and 25% lowest-scoring trusts were identified. The mean scores for these groups of trusts were then calculated for each year.
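
As an illustration of this step, the sketch below (Python rather than the SAS used for the published analysis; the column names are assumptions) computes each trust’s mean score over the first three years, identifies the top and bottom quartiles, and tracks those groups’ mean scores in every year:

```python
import pandas as pd

def quartile_trajectories(scores: pd.DataFrame) -> pd.DataFrame:
    """scores: one row per trust-year with assumed columns trust, year, opes."""
    first3 = sorted(scores["year"].unique())[:3]
    baseline = scores[scores["year"].isin(first3)].groupby("trust")["opes"].mean()
    top = baseline[baseline >= baseline.quantile(0.75)].index
    bottom = baseline[baseline <= baseline.quantile(0.25)].index
    # Yearly mean score of each baseline-defined group of trusts.
    return pd.DataFrame({
        "top25": scores[scores["trust"].isin(top)].groupby("year")["opes"].mean(),
        "bottom25": scores[scores["trust"].isin(bottom)].groupby("year")["opes"].mean(),
    })
```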

To assess performance consistency, trusts’ performances over time and relative to one another were analysed. Trusts were grouped into four clusters by k-means cluster analysis of standardised patient experience scores. Although Ward’s minimum variance hierarchical clustering [14] suggested different numbers of clusters in different years, ranging from four to nine, four was selected as a pragmatic choice. Consistent performance was defined as being in the same ranked cluster for more than five years.
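
A minimal sketch of this clustering step, assuming a long-format table of standardised scores (column names hypothetical); the original work used SAS v9.4, so the scikit-learn code below is an illustrative re-implementation rather than the authors’ exact procedure:

```python
import pandas as pd
from sklearn.cluster import KMeans

def ranked_clusters(scores: pd.DataFrame, k: int = 4) -> pd.DataFrame:
    """Cluster trusts within each year on a standardised score (assumed column
    opes_std) and rank the clusters by mean score so labels are comparable
    across years."""
    labels = {}
    for year, grp in scores.groupby("year"):
        km = KMeans(n_clusters=k, n_init=10, random_state=0)
        raw = km.fit_predict(grp[["opes_std"]])
        # Rank cluster centres: 0 = lowest-scoring cluster, k-1 = highest.
        rank = km.cluster_centers_[:, 0].argsort().argsort()
        labels[year] = pd.Series(rank[raw], index=grp["trust"].values)
    return pd.DataFrame(labels)  # rows: trusts, columns: years

def is_consistent(clusters: pd.DataFrame, min_years: int = 6) -> pd.Series:
    """True where a trust sat in its modal ranked cluster for >5 of 10 years."""
    modal = clusters.apply(lambda row: row.value_counts().iloc[0], axis=1)
    return modal >= min_years
```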

To assess consistency between hospital settings, A&E scores from 2014/15 were compared with the same year’s inpatient scores; outpatient scores from 2011/12 were separately compared with inpatient scores for that year. Cluster analysis determined consistency across settings: trusts were grouped into four clusters based on their A&E or outpatient scores, and these were compared with clusters based on their inpatient scores. In addition, trusts were divided into quartiles. This was done for the OPES (the overall score), the five domains and the seven questions with identical wording across the three surveys.
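
The cross-setting comparison then reduces to an agreement rate between two ranked cluster assignments for the same year; a sketch, reusing the hypothetical ranked_clusters output above for two settings:

```python
import pandas as pd

def cross_setting_agreement(ip_labels: pd.Series, other_labels: pd.Series) -> float:
    """Share of trusts placed in the same ranked cluster in both settings,
    e.g. inpatient 2014/15 versus A&E 2014/15."""
    both = pd.concat([ip_labels, other_labels], axis=1, join="inner")
    both.columns = ["ip", "other"]
    return float((both["ip"] == both["other"]).mean())
```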

To identify trusts which had improved over time, we compared each trust’s mean inpatient score for the first three years of the inpatient survey with its mean score for the final three years of the study period. The 10% of trusts which had made the biggest improvements were identified; similarly, we identified the 10% of trusts whose scores had improved the least. The most improving trusts were compared with the least improving trusts in terms of bed numbers, bed occupancy and staffing levels.
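
A sketch of this ‘improvers’ calculation under the same assumed data layout (Python for illustration; the thresholds mirror the 10% cut-offs described above):

```python
import pandas as pd

def improvement_deciles(scores: pd.DataFrame):
    """Change = mean of the last three years minus mean of the first three,
    per trust; returns the top and bottom 10% of trusts by that change."""
    years = sorted(scores["year"].unique())
    early = scores[scores["year"].isin(years[:3])].groupby("trust")["opes"].mean()
    late = scores[scores["year"].isin(years[-3:])].groupby("trust")["opes"].mean()
    change = (late - early).dropna()
    most = change[change >= change.quantile(0.90)]   # biggest improvers
    least = change[change <= change.quantile(0.10)]  # least improved
    return most, least
```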

Sensitivity analyses examined the impact of the clustering approach on consistency over time: Ward’s minimum variance hierarchical clustering, k-means with different seeds and a simple quartile approach. Removing five outliers had only minor impacts on the results and did not affect the conclusions, so the outliers were retained in the main analyses. All analyses were performed using SAS v9.4.

Results

Trends in patient experience scores over time

Fig 1 shows trends in domain and overall inpatient scores. There has been a steady two-point increase in the Clean; Friendly; Comfortable Place to be (CCFP) domain. Other domains show more erratic patterns, especially Safe; High Quality; Co-ordinated Care (SHQCC). Access and Waiting (AW) scores fell overall, but with major fluctuations. Both Better Information, More Choice (BIMC) and Building Closer Relationships (BR) saw relatively small improvements between 2011/12 and 2012/13.

Fig 1. Trends in patient experience scores for inpatients over time, overall (OPES) and by domain.

Mean scores for the 25% lowest scoring trusts and the 25% highest scoring trusts in the first three years are also shown.

https://doi.org/10.1371/journal.pone.0187012.g001

Table 1 summarises changes in inpatient, outpatient and A&E scores over time. Outpatient experience improved across all domains between 2004/05 and 2011/12, especially for AW. Overall A&E scores improved between 2004/05 and 2014/15, with the biggest increase for CCFP. There was a decline in AW scores for A&E patients. Changes in outpatient and A&E scores were greater than for inpatients.

Table 1. Summary of changes over time in scores by department and domain for all acute non-specialist English hospital trusts.

https://doi.org/10.1371/journal.pone.0187012.t001

Individual questions with the biggest improvement are the same across all three settings: cleanliness (IP: +7.1, A&E: +6.5, OP: +4.7) and information about danger signals (IP: +3.8, A&E: +3.9, OP: +4.0). Outpatients saw an improvement in both dimensions of AW, particularly total waiting time (+8.4). A&E patients reported the biggest improvements in experience of information about medication (both purpose (+3.3) and side-effects (+6.7)), as well as pain control (+4.4) and time to discuss health problems (+3.4). A&E patients’ experience of waiting to speak to a nurse or doctor fell the most (-5.6), followed by inpatients’ experience of waiting for a bed (-4.4) and discharge delay (-3.8).

The majority of trusts (67%) improved their overall inpatient score between 2005/06 and 2014/15, although the mean change was less than one point. The BR and CCFP domains had the highest percentages of trusts that improved. Over 50% of trusts had a lower AW score in 2014/15 than in 2005/06. The majority of trusts improved across all domains for both outpatient and A&E departments, except the AW domain for A&E. The variance in inpatient scores did not fall over time, although there is evidence that the variance in outpatient scores fell between 2004/05 and 2011/12. The lowest- and highest-scoring questions have remained the same in all years of the survey in all three settings.

Consistency in trust performance over time – inpatient experience

Consistency in inpatient scores over the ten years was high (Table 2). For overall scores, 71.5% of trusts were in the same ranked cluster for more than five of the ten years. There was also high consistency for individual domains.

Table 2. Consistency of trust performance ranking in inpatient experience scores between 2005/06 and 2014/15 for acute non-specialist trusts in England.

https://doi.org/10.1371/journal.pone.0187012.t002

The gap between the lowest- and highest-performing trusts in the initial period narrowed during the first three years, but there was little evidence of the lowest-performing trusts ‘catching up’ after this, except for the CCFP domain (Fig 1).

Consistency in trust performance across settings

Questions regarding waiting and information about medication side-effects and danger signals have been consistently low-scoring in all three settings since the surveys’ inception. High-scoring questions also show consistency over time and across settings, and include being treated with respect and dignity and being given sufficient privacy.

Cluster and quartile analyses of performance consistency across settings showed that approximately 50% of trusts were in the same cluster for the A&E or OP survey as for the inpatient survey overall (Table 3). In general, consistency was higher between OP and inpatients than between A&E and inpatients. Consistency was lower for individual domains and individual questions. Cleanliness scores had the highest consistency across settings; the lowest consistency was seen for the lowest-scoring question, receiving information about medication side-effects. Changes over time varied between domains and questions; however, high scores on many questions limit the variation that is possible.

Table 3. Consistency between inpatient and outpatient or A&E patient experience scores for domains and questions that are identically worded in all three surveys.

https://doi.org/10.1371/journal.pone.0187012.t003

Improving trusts

The 10% of trusts which had made the biggest improvements included both low- and high-performing trusts in 2005/06. There was no evidence of patterns in geographical location or type of trust. Two trusts were in the lowest ranked cluster, three were in the highest ranked cluster and the remaining eight trusts were in the middle ranked clusters. Two trusts were also improvers in terms of A&E PE scores, three in terms of OP scores and two in both A&E and OP scores. Table 4 summarises the characteristics of the most and least improving trusts. There is limited evidence that the most improving trusts have a higher number of beds and a higher number of doctors and consultants per 10 beds; p>0.5 for all comparisons.

Table 4. Mean characteristics of most improving trusts and least improving trusts.

https://doi.org/10.1371/journal.pone.0187012.t004

Discussion

Main findings

During 11 years of national NHS hospital surveys, overall patient experience (PE) has been consistently high in most areas, with minor increases in the majority of scores. PE of access and waiting (AW) for both A&E and inpatients has declined, but outpatient scores have risen. Scores for the ‘Clean; friendly; comfortable place to be’ (CCFP) domain have improved across all three settings (IP: +1.8, A&E: +3.1, OP: +2.6), mainly attributable to increases in PE of cleanliness. Experience of waiting and of information at discharge was low-scoring in all years in all three settings; a similar pattern was seen for high-scoring questions. PE of cleanliness has shown a marked improvement in all three settings, as has information about danger signals. Outpatient scores improved for waiting for an appointment, whilst waiting for a bed and discharge delay have deteriorated for inpatients, as has waiting to speak to a nurse or doctor for A&E patients. Trusts’ performance in comparison with each other has also been consistent over time and between settings. Inspection of the trusts which improved the most and the least did not identify any patterns by area or type of trust. There is some evidence that the most improving trusts had a higher supply of doctors and consultants, but not of nursing staff.

Strengths and limitations

This study is the most comprehensive summary of detailed national NHS patient survey data across three settings since the programme’s inception. We focused on domains developed by the Department of Health which have included the same questions since the initial surveys. Although other domains have been suggested and there may be challenges with combining questions into domains, the consistency of the questions over time and between trusts makes this a pragmatic approach.

Comparing PE across the three hospital settings has inherent difficulties as patients’ expectations will vary by department, possibly influencing their responses on PE surveys [14, 15].

Cluster analysis assigns trusts to groups based on actual variation in performance, in contrast to dividing trusts into quartiles which are inherently unstable [16]. The consistency measure depends on the method used and the number of clusters selected, but similar trends were seen with a quartile approach.

Excluding trusts which did not have complete data for the ten years meant excluding trusts which merged or were newly formed during the study period. Mean scores of trusts without complete data were typically lower than the overall mean, but it is not clear why. A separate study of these trusts would provide information on the impact of mergers on patient experience.

Changes in the scores are modest. Whether this is because PE has actually changed little or because the survey instruments are insensitive to real improvements in care cannot be distinguished from the survey results alone. In addition, trusts which are already performing well may find it hard to improve, as the majority of their patients already give the maximum score (a ceiling effect). Lastly, for many questions there are only two response options available to a reasonably happy patient (e.g. ‘always’ or ‘often’), which may pose another challenge for trusts wishing to improve.

Identifying features of the trusts which showed the biggest improvements has not revealed clear patterns, although only limited trust-level data were available for this analysis. For example, a review of trust annual reports and websites to identify trust-level initiatives might reveal common approaches among the most improving trusts, but this was beyond the scope of this study.

Implications

Previous analysis of the 2009 inpatient and outpatient surveys and the 2008 A&E survey suggested that 21% of trusts consistently performed above or below average, with lower levels of consistency between hospital settings [12]. We found higher consistency between settings, which might be due to the different domains of care or the number of clusters used. Our research reinforces the finding that trusts perform consistently relative to each other, not just across domains but also over time and between settings.

We found that the big improvements in inpatient cleanliness scores [10, 17] were also seen in the outpatient and A&E surveys. These improvements coincided with national targets and campaigns such as the NHS ‘cleanyourhands’ campaign [18]. It has been suggested that the biggest improvements are in areas of policy intervention such as cleanliness [10, 19]. Information about danger signals has also shown big improvements across all three settings with no evidence of an associated national intervention. These improvements may be due to action following the low scores.

Both the inpatient and outpatient scores improved for waiting time for an appointment, which mirrors a reduction in waiting time between referral and treatment seen in other data [20]. Similarly, A&E patients report a worsening of experience in waiting to speak to a doctor or nurse, reflected in an increase in time to initial assessment since 2011 [20]. The agreement between PE waiting measures and other waiting time data provides good evidence that PE surveys are useful barometers of waiting time performance.

The consistency of low-scoring questions covering “medication side-effects” and “danger signals to look out for” in all surveys suggests that these surveys can inform trusts about areas requiring trust-wide action. This partly counters concerns that trust-wide surveys do not reflect what happens in individual departments.

Possible barriers to using data more effectively have been proposed [10, 21] and include a lack of time, delays in dissemination, the introduction of the Friends and Family Test, scepticism among clinicians and limited understanding of statistical methods. There has been no systematic review of the way in which trusts have used the results, although this need has been highlighted [19], as has the need for systematic guidance on how to use data [22]. One potential use is the evaluation of initiatives such as ‘Hello My Name is…’ [23].

Conclusion

Despite the pressures on the NHS over the last ten years, there is strong evidence that patients’ experiences of hospitals are positive and that they are generally satisfied with the care they receive. Key areas of improvement include the policy-driven improvement in cleanliness scores in all three settings. PE of aspects of access and waiting has declined, as has experience of discharge delay, likely reflecting known increases in pressure on England’s NHS. Information about danger signals and medication side-effects showed the least consistency across settings, and scores have remained low over time. The use of patient surveys to improve patient experience and drive subsequent quality improvement in the NHS needs to be developed.

Supporting information

S1 Table. Number of respondents to NHS hospital surveys and response rates (%).

https://doi.org/10.1371/journal.pone.0187012.s001

(DOCX)

S2 Table. Proportion of respondents by sex and age in most recent surveys.

https://doi.org/10.1371/journal.pone.0187012.s002

(DOCX)

References

  1. Price RA, Elliott MN, Zaslavsky AM, Hays RD, Lehrman WG, Rybowski L, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev 2014;71(5):522–54. pmid:25027409
  2. Greaves F, Pape UJ, King D, Darzi A, Majeed A, Wachter RM, et al. Associations between internet-based patient ratings and conventional surveys of patient experience in the English NHS: an observational study. BMJ Qual Saf 2012;21(7):600–605. pmid:22523318
  3. Delnoij DMJ. Viewpoints—Measuring patient experiences in Europe: what can we learn from the experiences in the US and England? Eur J Public Health 2009;19(4):354–356. pmid:19620220
  4. Liu Y, Malin JL, Diamant AL, Thind A, Maly RC. Adherence to adjuvant hormone therapy in low-income women with breast cancer: the role of provider-patient communication. Breast Cancer Res Treat 2013;137(3):829–836. pmid:23263740
  5. Jha AK, Orav EJ, Zheng J, Epstein AM. Patients' perception of hospital care in the United States. N Engl J Med 2008;359(18):1921–1931. pmid:18971493
  6. Isaac T, Zaslavsky AM, Cleary PD, Landon BE. The relationship between patients' perception of care and measures of hospital quality and safety. Health Serv Res 2010;45(4):1024–1040.
  7. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med 2012;172(5):405–411. pmid:22331982
  8. NHS England. About NHS Surveys. www.nhssurveys.org (2015, accessed 22 Oct 2015).
  9. NHS England. Overall Patient Experience Scores: Supporting Information. www.england.nhs.uk/statistics/statistical-work-areas/pat-exp/sup-info/ (2015, accessed 30 June 2015).
  10. Raleigh V, Thompson J, Jabbal J, Graham C, Sizmur S, Coulter A. Patients' experience of using hospital services. www.kingsfund.org.uk/publications/patients-experience-using-hospital-services (2015, accessed 18 Dec 2015).
  11. Sullivan P, Harris ML, Bell D. The quality of patient experience of short-stay acute medical admissions: findings of the Adult Inpatient Survey in England. Clin Med 2013;13(6):553–6.
  12. Raleigh VS, Frosini F, Sizmur S, Graham C. Do some trusts deliver a consistently better experience for patients? An analysis of patient experience across acute care surveys in English NHS trusts. BMJ Qual Saf 2012;21:381–390. pmid:22421913
  13. NHS England Analytical Team. Methods, Reasoning and Scope – Statement of Methodology for the Overall Patient Experience Scores (Statistics). NHS England, 2013.
  14. Coussement K, Demoulin N, Charry K. Marketing Research with SAS Enterprise Guide. Farnham, Surrey: Gower Publishing Limited, 2011. pp. 97–99.
  15. Bjertnaes OA, Sjetne IS, Iversen HH. Overall patient satisfaction with hospitals: effects of patient-reported experiences and fulfilment of expectations. BMJ Qual Saf 2012;21(1):39–46. pmid:21873465
  16. Marshall EC, Spiegelhalter DJ. Reliability of league tables of in vitro fertilisation clinics: retrospective analysis of live birth rates. BMJ 1998;316(7146):1701–4. pmid:9614016
  17. Reeves R, West E. Changes in inpatients’ experiences of hospital care in England over a 12-year period: a secondary analysis of national survey data. J Health Serv Res Policy 2015;20(3):131–137. pmid:25534393
  18. National Patient Safety Agency. About the cleanyourhands campaign. www.npsa.nhs.uk/cleanyourhands/about-us/ (2011, accessed 4 Dec 2015).
  19. DeCourcy A, West E, Barron D. The National Adult Inpatient Survey conducted in the English National Health Service from 2002 to 2009: how have the data been used and what do we know as a result? BMC Health Serv Res 2012;12:71. pmid:22436670
  20. Baker C. Accident and Emergency Statistics—Briefing Paper no. 6964. House of Commons Library, 2015.
  21. Reeves R, Seccombe I. Do patient surveys work? The influence of a national survey programme on local quality-improvement initiatives. Qual Saf Health Care 2008;17:437–441. pmid:19064659
  22. Haugum M, Danielsen K, Iversen HH. The use of data from national and other large-scale user experience surveys in local quality work: a systematic review. Int J Qual Health Care 2014. http://dx.doi.org/10.1093/intqhc/mzu077
  23. NHS England. Compassion in care hits new milestone. www.england.nhs.uk/2015/02/hellomynameis/ (2015, accessed 20 May 2015).