Health Policy, Volume 89, Issue 2, February 2009, Pages 193-200

From cost-effectiveness information to decision-making on liquid-based cytology: Mind the gap

https://doi.org/10.1016/j.healthpol.2008.06.001

Abstract

Objective

This paper explores the policy process involved in the production of cost-effectiveness information in the context of both national and local policy-making requirements. We use the decision to implement a new technology for cervical cancer screening (liquid-based cytology) in England as a case study.

Methods

The analysis traces the initial decision by the National Institute for Health and Clinical Excellence to commission further research before implementing this new technology, the economic data produced as a result, the final decision taken nationally, and the implications for decision-makers locally.

Results

The paper highlights a number of reasons why there may be a gap between the evidence produced by a cost-effectiveness analysis and the information needs of the decision-maker. For example there are difficulties in estimating whether savings in staff time are realisable. In addition, even after a technology has been deemed cost-effective and is recommended for national implementation, further questions remain at the local level, including identifying the most cost-effective way to implement a technology, and selecting the best supplier.

Conclusion

In order to make cost-effective implementation decisions, local decision-makers require economic data in addition to that required for the national recommendation, and this deserves recognition and further research.

Introduction

A number of qualitative studies have suggested that there is a gap between policy makers’ need for cost-effectiveness information and the data provided by cost-effectiveness analyses [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12]. For example, decision-makers have criticised economic evaluations for identifying cost savings that are hard to realise in practice, such as reductions in the average time taken by nurse practitioners to perform specific actions [2], [4]. Studies have also found decision-makers to be particularly critical of economic evaluations based on modelling, due to their lack of transparency and the large number of assumptions used [3], [6], [8], [9]; decision-makers have also expressed concern about the quality of the clinical evidence used in cost-effectiveness analyses [2], [5], [10].

Research has shown that there are also organisational reasons why cost-effectiveness results are difficult to implement: for example, difficulties in transferring savings between budgets [2], [3], [4] and decision-makers’ lack of training in health economics [5], [6]. With the establishment of health technology appraisal institutions in several countries, such as the National Institute for Health and Clinical Excellence (NICE) in the UK, there is more recent evidence that some of the barriers to using cost-effectiveness data may be diminishing at the national level [7].

Conversely, the evidence suggests that the picture on the use of economic evaluation at the local level may be quite different. For example, a number of studies tracking the implementation of NICE guidance have shown that actual implementation of interventions recommended by NICE has been highly variable [13], [14]. A recent survey of local decision-making committees in England found that cost-impact, as opposed to cost-effectiveness, was of greater concern locally, and that local committees tended to focus on ‘how’, as opposed to ‘whether’, new treatments should be introduced [15].

The objective of this paper is to explore why there may be a gap between the cost-effectiveness data provided by health economists and the information on costs and effects that policy makers require in order to make, or at least inform, their decisions at both the national and local level. We use liquid-based cytology as a case study. We consider how the cost-effectiveness evidence was generated and why there may be problems with the resulting estimates. We also identify a number of further cost-effectiveness questions that are important locally but were not fully addressed, such as: which is the most cost-effective supplier, what is the real cost-impact, and what is the most cost-effective way to implement technology change?

There is a much broader literature, spanning sociology and organisational management, on why new healthcare technologies may, or may not, be adopted regardless of cost-effectiveness, including the role of ‘opinion leaders’ views’ and the use of ‘tacit’ as well as ‘explicit’ knowledge [16], [17], [18]. Whilst these factors undoubtedly play a role in decision-making, the purpose of this paper is to highlight that even when policy makers are using explicit information such as cost-effectiveness, there may be gaps and misinterpretations between the cost-effectiveness data provided by health economists and the type of information that policy makers require.

The paper is structured as follows. Firstly, the policy context behind the initial decision to set up a series of pilot sites and collect cost-effectiveness information is outlined. Then the economic data presented as part of this evaluation are reviewed. Following this, the role of health economics in the resulting national and local policy decisions is examined, and the potential limitations of the cost-effectiveness modelling work are assessed. Finally, the related policy implications are discussed.


Initial NICE review and decision to set up the pilot sites

In 2000, NICE considered the use of liquid-based cytology (LBC), a semi-automated technique, as an alternative to conventional cytology for cervical cancer screening. This decision was informed by the work of Payne et al. [19], which considered the additional consumable and equipment costs related to liquid-based cytology, but not the staff time involved in collecting and processing samples. As there is no long-term evidence from randomised trials about the relative effectiveness of liquid-based

Interim report of the pilot site evaluation

The pilot site evaluation was planned to take place over two years; however, there was just over one year to obtain data from the pilot sites in time to inform the NICE re-appraisal of LBC. It was therefore decided to produce an interim report [21] to inform the NICE appraisal, followed by a final report at the end of the evaluation period, which included the evaluation of combined LBC and HPV triage [22].

Second re-appraisal by NICE and resulting decision

Following the completion of the first report of the English pilot studies and a further health technology assessment report [23], the recommendations from the second evaluation of LBC by NICE were as follows [24]:

“It is recommended that liquid-based cytology is used as the primary means of processing samples in the cervical screening programme in England and Wales.”

Costs and cost-effectiveness results from the pilot sites were quoted in the guidance including the costs of screening and the

Implementation of the NICE decision locally

The Secretary of State for Health has directed that the NHS is required to provide funding and resources for all medicines and treatments recommended by NICE through its technology appraisals work programme. In the case of LBC, the normal 3-month implementation period for NICE guidance was waived, as it was expected that it would take up to 5 years to complete the roll-out of LBC [24].

Following the NICE decision the Department of Health produced a circular for Executives in the NHS in England

Extent of LBC adoption

Following the NICE decision in 2003, it was determined that LBC should be implemented locally over a 5-year period. By the end of March 2007, 85% of laboratories in England had either converted to liquid-based cytology or had firm plans to do so [28]. Compared with earlier studies that assessed the extent to which other NICE recommendations have been implemented, this represents quite a high level of compliance with the NICE recommendations [13], [14].

The high level of compliance may be specific to

Conclusion

In addition to theoretical complexities, health economists have to confront a range of practical difficulties when generating robust estimates of costs and cost-effectiveness. These include uncertainty about whether staff savings are realisable, and reliance on effectiveness data, such as diagnostic accuracy estimates, that may be subject to bias. To that extent, some of the concerns expressed in the literature by decision-makers about economic evaluations may be well founded. In addition, economic

References (38)

  • B. Motheral et al. Role of pharmacoeconomics in drug benefit decision-making: results of a survey. Formulary (2000)
  • B.S. Bloom. Use of formal benefit/cost evaluations in health system decision making. American Journal of Managed Care (2004)
  • H. Weatherly et al. Using evidence in the development of local health policies. International Journal of Technology Assessment in Health Care (2002)
  • Asim O. The use of results from economic evaluation in applied decision-making in the UK Health Service. PhD Thesis,...
  • T.A. Sheldon et al. What's the evidence that NICE guidance has been implemented? Results from a national evaluation using time series analysis, audit of patients’ notes, and interviews. British Medical Journal (2004)
  • ABACUS International. NICE guidance implementation tracking data source, methodology and results; 2005....
  • I. Williams et al. The use of economic evaluations in NHS decision-making: a review and empirical investigation. Health Technology Assessment (2008)
  • T. Greenhalgh et al. Diffusion of innovations in health service organisations: a systematic literature review (2005)
  • M. Polanyi. The Tacit Dimension (1962)