
Cochrane Database of Systematic Reviews Protocol - Intervention

The effect of consolidating acute‐care surgery on patient, staff, and resource outcomes



Abstract

This is a protocol for a Cochrane Review (Intervention). The objectives are as follows:

The proposed review will address the following questions: 

What impact does consolidating surgical services through an "acute care surgery" model have on patients, staff, and resources?

  • What are the impacts of each of four (or more) variants of the acute care surgery model?

  • What effects, if any, are consistently found across different variants of the model?

Background

Description of the condition

Access to emergency or urgent surgery has become a serious concern in North America as well as Australasia (Bhagvan 2009; Division of Advocacy and Health Policy 2006; Russell 2007). It is widely believed that a major part of the problem is a dearth of general surgeons (Sanchez 2009; Sheldon 2007). The past few decades have seen substantial growth in subspecialization, meaning that fewer residents choose general surgery as a career, and fewer surgeons feel qualified to perform general acute‐care surgery (Dawson 2004; Gough 2008). Potential contributors to this trend (e.g. reimbursement patterns, the impact of on‐call responsibilities on lifestyle, medicolegal issues, declining prestige of general surgery, lack of educational opportunities for residents, etc.) may vary by country, as may the overall supply of surgeon‐hours relative to patient demand (Borman 2008; Russell 2008). One factor that healthcare administrators can influence is the organization of surgical services. There is growing interest in the “acute care surgery” (ACS) model, in which designated resources (typically including the time of designated surgeon(s)) are set aside for emergency or urgent surgery (Diaz 2007; Maa 2007). Such consolidation of surgical services has been reported to improve both patient access and outcomes, bringing shorter wait times and decreased length of stay in emergency departments, reduced complications, and increased patient and staff satisfaction (Earley 2006; Maa 2007; Parasyn 2009). However, the literature in this area has not been systematically reviewed. 

Interest in this issue is also evident in Europe, where there is currently no unified approach to acute care surgery (Leppäniemi 2008; Uranues 2008). The availability of general surgeons does not appear to be a major concern in Europe; indeed, there has been a move away from subspecialization and towards “common trunk” training. However, some regard the ACS model as a promising way to reduce interference between emergency and elective patient streams, preventing one type of surgery from delaying the other. In the UK, local efforts to reorganize surgical services have been driven by the need to cope with an increasing volume of emergency admissions while meeting national targets for both emergency and elective wait times (Sorelli 2008). Dedicated facilities (e.g. ward, operating room) or teams for ACS have been reported in several other countries, but may be limited to a small number of centres (Uranues 2008). Uranues 2008 suggested that in general, European decision‐makers are waiting to see the results of American initiatives before embarking on any major reorganization.

Description of the intervention

There is currently no single, accepted ACS model. At present, the “model” is best conceptualized as a family of approaches in which acute‐care surgery is grouped together and separated from elective surgery by person and/or place. Preliminary exploration of the literature suggests at least four distinct interventions, which can be implemented separately or in various combinations.

  • Creating a new role, sometimes called the surgical hospitalist, responsible for only non‐elective surgery while on call (e.g. Maa 2007). This intervention, often introduced in the context of a shortage of surgeons for the ED, is typically intended to improve efficiency by freeing surgeons from juggling emergency and elective cases.

  • Integrating or expanding existing specialties to create a single program responsible for general acute‐care or emergency surgery. In particular, this might involve expanding a trauma service to include emergency general surgery (e.g. Pryor 2004). Although this intervention may arrive at the same end point as the “new role” intervention, the direction of change is reversed (i.e. from greater to lesser specialization). One of this intervention’s key objectives is to increase trauma surgeons’ operative experience.

  • Reserving a certain place within a hospital for emergency surgery; for instance, creating a designated emergency operating theatre (e.g. Calder 1998). This intervention typically aims to reduce out‐of‐hours operating, which is viewed as unsafe for patients and stressful for physicians.

  • Directing acute care surgery to certain hospitals within a multi‐hospital region (e.g. Hamilton 1997). Regionalization has been pursued by some Canadian health regions in order to make better use of scarce resources.

Within the EPOC taxonomy, the ACS model can be classified broadly as an organizational intervention at the structural level.

The ACS model is distinct from the creation of trauma systems/centres, an approach that focuses on different patients (a specialized rather than heterogeneous population) and different priority outcomes (survival and surgical outcomes rather than access and efficiency; see Celso 2006). It is also distinct from the “ring‐fencing” of certain types of elective surgery (see Kjekshus 2005): ring‐fencing involves creating a dedicated, protected process for managing predictable, low‐complexity cases, whereas the ACS model involves a different set of strategies to improve the management of unpredictable, potentially complex cases. We will not include trauma systems/centres or ring‐fencing in this review.

Objectives

The proposed review will address the following questions: 

What impact does consolidating surgical services through an "acute care surgery" model have on patients, staff, and resources?

  • What are the impacts of each of four (or more) variants of the acute care surgery model?

  • What effects, if any, are consistently found across different variants of the model?

Methods

Criteria for considering studies for this review

Types of studies

This review will include randomized controlled trials (RCTs), controlled clinical trials (CCTs), controlled before‐after (CBA) and interrupted time series (ITS) designs. As surgical consolidation is a major, hospital‐ or system‐wide intervention, we anticipate that few if any RCTs will exist, with most of the evidence coming from ITS or CBA studies. In keeping with EPOC criteria, ITS studies must include at least three points before and three after the intervention; we will not include simple pre‐post designs. CBA studies must include at least two intervention groups and at least two comparable control groups, with contemporaneous data collection. In addition, the review will exclude studies (or outcomes) that feature only a “post” measure and no comparator, studies that do not directly test an ACS model (e.g. those investigating which tasks can be performed by various types of surgeons), or process/implementation evaluations without a quantitative outcome evaluation. We will not include qualitative studies of staff or patient perspectives in this review, but we will flag these for potential use in a future review.

Both published and unpublished sources, from any year and in any language, are eligible for inclusion.

Types of participants

Patients. The focus will be on patients (of any age) who require non‐elective surgery. Studies may include all such patients or only a subset thereof (e.g. patients with appendicitis). We will also include studies that consider the ACS model's effects on other segments of the surgical patient population (e.g. elective).  Only studies of actual patients are eligible, not simulation studies.

Providers. We will consider program impacts on any healthcare professional, including but not limited to surgeons, other physicians, residents, nurses, and allied health professionals.

Settings. We will include studies that take place in hospitals or multi‐hospital systems. While we suspect that this topic is most likely to have been studied in developed countries, studies conducted in any country are eligible for inclusion.

Types of interventions

In order to include all potentially relevant studies, this review will consider any model in which all emergency or non‐elective surgery is assigned to designated person(s) or place(s). This includes the new role, trauma‐service expansion, dedicated emergency OR, and regionalization interventions described above, or any combination thereof. Should other relevant models be identified in the course of the literature review, we will include these also. However, we will not include the creation of a trauma system or centre, or the ring‐fencing of elective surgery. While a dedicated emergency OR is an eligible intervention, this review is not designed to assess models of scheduling emergency surgery in general (e.g. blocking). Although the new role for acute‐care surgery is sometimes called the “surgical hospitalist,” we note that this review does not concern hospitalist physicians who are not surgeons (nor their interactions with surgeons who are not hospitalists).

Depending on the variant of the ACS model, eligible interventions may consolidate both non‐trauma and trauma surgery, or may be limited to non‐trauma surgery. We will not include interventions that separate out only one type of surgery (e.g., through a centre of excellence) ‐ although, as noted above, we will include studies that measure outcomes for only one type of patient. 

Types of comparators

This review will include studies that compare one of the above interventions to any time period or group of patients (or providers) without the intervention, including historical controls. Groups of patients without the intervention may be in the same or different hospital(s) and region(s), but must have the same condition or a similar mix of conditions. Although we will not pre‐specify ways in which control hospital(s) or region(s) must be similar to intervention hospital(s) or region(s), we will make note of any differences (e.g. type of hospital, rural/urban setting, geographic area, staff mix, etc.) and address these as part of risk‐of‐bias assessment. Time periods without the intervention may either precede the intervention or alternate with it. We will also include studies that compare one ACS model to another.

Types of outcome measures

Primary outcomes

The primary outcomes are access to non‐elective surgery and surgical outcomes.

The preferred measure of access is wait time to non‐elective surgery (i.e. time from when patients present to when they receive surgery). Other acceptable measures of access include ED length of stay, percent of patients seen or treated within benchmark time, and wait time for patients requiring urgent surgery who do not present at the ED (i.e. time from when the surgical consult is sent to when the patient receives surgery).

The preferred measures of surgical outcomes will be mortality and complications for either all or selected type(s) of non‐elective surgery. For non‐trauma surgical patients, mortality is a rare outcome and seldom the focus of the intervention; however, it is highly relevant for trauma patients, who may be affected by the implementation of an ACS model. Inpatient length of stay is another acceptable measure of outcomes, but it is an indirect measure and will therefore be interpreted with caution.

Secondary outcomes

Secondary outcomes include objective measures of the following (if the measure is a survey, it must be of known reliability and validity):

  • number of elective surgeries “bumped” or cancelled;

  • staff satisfaction (as measured by a survey of surgeons, residents, and/or other personnel, with the “post” survey undertaken at least six months after implementation of the model);

  • patient satisfaction/acceptance (measured by a survey, as above);

  • educational opportunities for residents (i.e. volume and variety of surgeries to which residents are exposed);

  • timing of surgery (i.e. day, evening, or night);

  • staff workload (volume of patients, cases, or hours worked by surgeons, nurses, or other staff);

  • financial outcomes (actual costs and revenues; a full economic analysis is beyond the scope of this review, which focuses on effectiveness);

  • any unintended impacts and harms (measured quantitatively); these may include impacts on patients outside the scope of the ACS model.

Search methods for identification of studies

Electronic searches


We will search MEDLINE (OVID), EMBASE (OVID), CINAHL (EBSCO), and Dissertation Abstracts for potentially relevant sources. We have provided the proposed search strategy, which includes both MeSH terms and free‐text keywords, below. We will also undertake a narrower Google search for evaluation reports, and browse websites that may be sources of relevant grey literature (including the websites of national and international surgical organizations and other sites identified through CADTH’s “Grey Matters” checklist). Additionally, we will search the Cochrane EPOC Group Specialised Register (See SPECIALISED REGISTER under GROUP DETAILS). (See Appendices 1 and 2 for the MEDLINE and Google search strategies.)

Searching other resources

We will attempt to identify additional studies, as available, by:

  • canvassing review team members’ personal collections and contacts;

  • e‐mailing all corresponding authors who have published an eligible article in the past five years, to ask whether they also know of unpublished studies;

  • examining the reference lists of included studies and any relevant reviews identified during the search.

Data collection and analysis

Selection of studies

We will use the inclusion/exclusion criteria described above to develop a screening guide, which we will pilot to ensure that the criteria are clear to, and can be consistently applied by, all screeners. Two team members (SK and ES or PB) will independently screen each record; we will retrieve all reports that we deem potentially relevant. At least two authors will screen each retrieved article; we will resolve disagreements by consensus and, where necessary, in consultation with other members of the team. In the event that a member of the review team is an author of a potentially eligible report, we will not involve this person in its screening, assessment, or data extraction.

The author who reads all of the reports (SK) will seek to identify any duplicate publications or sister studies. We will combine reports with the same intervention and participants and analyze them as a single study. We will analyze reports that concern different participants separately, but we will compare information about the intervention, timeframe, and context across such reports, and note any discrepancies.

Data extraction and management

We will develop a data extraction template based on a modified version of the EPOC data collection checklist, which the full team will review before use. Two team members will independently extract data from each report (SK and one other; studies will be randomly divided among team members). We will extract the following types of data.

  • General information about the report (year, country, author, any duplicate or sister publications)

  • Description of the context (e.g. type of hospital or system, stated reason(s) for introducing the intervention, including any theory behind model selection)

  • Description of the intervention (components; which model(s) it corresponds to, if any)

  • Classification of the intervention according to the EPOC taxonomy

  • Description of any co‐interventions or concomitant organizational changes

  • Information about study design, methods, and quality (as per EPOC Data Collection Checklist)

  • Information about the patient and staff populations involved (as per EPOC Data Collection Checklist)

  • Information about the comparator(s)

  • Information about outcomes (what specifically was measured, when it was measured, and appropriate measures of effect size and variability)

For both data extraction and risk of bias assessment (see below), we will resolve factual differences between the two authors’ accounts by re‐checking the source. We will resolve any differences of interpretation by consensus, and where needed, in consultation with other team members, the contact editor, or both.

Assessment of risk of bias in included studies

Using the EPOC Risk of Bias Guidelines for experimental and quasi‐experimental studies, two reviewers will independently assess risk of bias during data extraction.

Apart from ensuring that study designs meet the EPOC criteria for inclusion, we will not exclude any studies on the basis of methodological quality. We will provide a summary table specific to the risk‐of‐bias assessment, including at least five columns: design issues (e.g. randomization lacking/improper); participant issues (e.g. uncontrolled or unknown differences in patient populations between groups or time periods); data quality issues (e.g. missing data, un‐blinded assessment); analysis issues (inappropriate analysis that could not be redone); and confounding events (e.g. other changes occurring during an ITS study). We will consider risk of bias when exploring heterogeneity (see below), and will undertake sensitivity analysis, if suitable, excluding studies at high risk of bias.

Measures of treatment effect

We will report outcomes in natural units. For between‐groups analyses, we will present the mean difference for continuous measures (using standardized mean difference if the outcome is not measured in comparable units, which might occur for some of the secondary outcomes), and odds ratio for dichotomous measures. 
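As an illustrative sketch only (the helper functions, counts, means, and standard deviations below are hypothetical and are not drawn from any included study), the between‐groups effect measures described above could be computed as follows:

```python
import math

def mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Mean difference (intervention minus control) with a 95% CI,
    using a pooled SD and a normal approximation."""
    md = m1 - m2
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    se = sd_pooled * math.sqrt(1 / n1 + 1 / n2)
    return md, (md - 1.96 * se, md + 1.96 * se)

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d: the mean difference divided by the pooled SD, for outcomes
    not measured in comparable units across studies."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

def odds_ratio(a, b, c, d):
    """Odds ratio with a 95% CI for a 2x2 table: a/b = events/non-events in the
    intervention group, c/d = events/non-events in the control group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.log(or_) - 1.96 * se_log
    hi = math.log(or_) + 1.96 * se_log
    return or_, (math.exp(lo), math.exp(hi))

# Hypothetical data: ED length of stay (hours) and complication counts.
print(mean_difference(6.2, 2.1, 180, 7.9, 2.4, 175))
print(standardized_mean_difference(6.2, 2.1, 180, 7.9, 2.4, 175))
print(odds_ratio(12, 168, 25, 150))
```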

For ITS analyses, we will present effect sizes of both the change in level after the intervention, and change in slope of the regression line.  Level effects will be assessed at 12 months, 24 months, and 36 months. If the report features an inappropriate analysis, or no analysis, we will reanalyze the data where possible. In a short ITS design (fewer than 20 points before the intervention), the analysis would control for the linear effect of time and, where possible, for seasonality; in a long ITS design, it would also control for autocorrelation. We will annotate any studies that we have reanalyzed.
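A minimal sketch of the segmented regression implied above, assuming monthly data and the statsmodels library (the series, variable names, and the six pre‐ and post‐intervention points are hypothetical); the coefficients on the level and trend terms correspond to the change in level and the change in slope:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly wait-time series: six points before and six after
# the intervention (EPOC requires at least three on each side).
df = pd.DataFrame({
    "wait_time": [14.1, 13.8, 14.5, 14.0, 13.9, 14.2,   # pre-intervention
                  12.0, 11.5, 11.2, 10.9, 10.7, 10.4],  # post-intervention
})
df["time"] = list(range(1, len(df) + 1))      # underlying linear effect of time
df["level"] = [0] * 6 + [1] * 6               # step (level) change after the intervention
df["trend"] = [0] * 6 + list(range(1, 7))     # change in slope after the intervention

# Short series: control for the linear time trend; for a long series, a
# correction for autocorrelation (e.g. HAC / Newey-West errors) could be added.
model = smf.ols("wait_time ~ time + level + trend", data=df).fit()
print(model.params["level"])   # estimated change in level
print(model.params["trend"])   # estimated change in slope
```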

For all non‐randomized studies, we will also record any adjusted measures of intervention effect, and what variables were controlled to obtain them. Some potential confounders that might be controlled include patient age, sex, type of condition, and disease or injury severity; and other organizational changes or interventions implemented during the study period. If there is more than one adjusted measure, we will record the one that incorporates the greatest number of variables listed in the preceding sentence.

Unit of analysis issues

Some cluster‐randomized trials or CBA studies may mistakenly treat the individual as the unit of analysis, without taking into account that assignment to treatment conditions was done at the group level. Under such conditions, we will use the intraclass correlation coefficient (if reported or available from the authors) to re‐estimate P values and confidence intervals. If this information is not available, we will note that the study contained a unit of analysis error, and suppress the P value and confidence interval. (Note that we are not planning a meta‐analysis, so it will not be necessary to calculate the effective sample size.)
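A sketch, under our own assumptions, of how a result affected by a unit‐of‐analysis error could be re‐estimated using the intraclass correlation coefficient via the design effect 1 + (m − 1) × ICC (the effect estimate, naive standard error, cluster size, and ICC below are hypothetical):

```python
import math
from scipy import stats

def cluster_adjusted(estimate, se_naive, avg_cluster_size, icc):
    """Inflate a naively computed standard error by the design effect
    1 + (m - 1) * ICC, then recompute the 95% CI and two-sided P value."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    se_adj = se_naive * math.sqrt(design_effect)
    z = estimate / se_adj
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    ci = (estimate - 1.96 * se_adj, estimate + 1.96 * se_adj)
    return se_adj, ci, p

# Hypothetical: a mean difference of -1.5 hours with a naive SE of 0.5,
# ignoring clustering by hospital (average 40 patients per cluster, ICC = 0.05).
print(cluster_adjusted(-1.5, 0.5, 40, 0.05))
```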

Dealing with missing data

We will contact authors by e‐mail where information in their report(s) is incomplete or unclear.

Where particular statistics are not reported, we will calculate standard deviations, confidence intervals, and P values from other available statistics, and odds ratios from the numerator and denominator provided. If a study does not report the numerator and denominator for binary outcomes, but only percentages, we will impute the most plausible numerator and denominator from the exact percentage values reported. (For instance, a value that rounds to 12.3% would be possible with some denominators but not others.) We will not use any other forms of imputation. We will convert graphical results to numbers by measuring the distances on the graph (see Rotter 2010). We will identify any results that we have reanalyzed.
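As a hypothetical illustration of the percentage‐based imputation described above (the function and the reported value are our own), one could enumerate the numerator/denominator pairs consistent with a rounded percentage:

```python
def plausible_counts(reported_pct, max_denominator, decimals=1):
    """Enumerate (numerator, denominator) pairs whose percentage rounds to the
    reported value; few matches mean the imputation is relatively unambiguous."""
    matches = []
    for d in range(1, max_denominator + 1):
        for n in range(0, d + 1):
            if round(100 * n / d, decimals) == reported_pct:
                matches.append((n, d))
    return matches

# Hypothetical: a study reports that 12.3% of at most 80 patients had a complication.
print(plausible_counts(12.3, 80))
```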

Where outcome data are missing for some participants, we will use an available case analysis, and conduct sensitivity analysis as needed (e.g. for a dichotomous variable, by assuming that missing participants had the same rate of events as control group participants). In the course of risk of bias analysis, we will flag studies with non‐trivial (more than 5%) or unbalanced rates of missing data.
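A minimal sketch of the dichotomous‐outcome sensitivity analysis described above, assuming hypothetical counts (the function and numbers are illustrative only):

```python
def or_with_imputed_missing(events_i, analysed_i, missing_i,
                            events_c, analysed_c, missing_c):
    """Sensitivity analysis: assume participants with missing outcomes experienced
    events at the control-group rate, then recompute the odds ratio."""
    control_rate = events_c / analysed_c
    ei = events_i + control_rate * missing_i
    ec = events_c + control_rate * missing_c
    ni = analysed_i + missing_i
    nc = analysed_c + missing_c
    return (ei / (ni - ei)) / (ec / (nc - ec))

# Hypothetical counts: 12/180 analysed intervention patients (10 missing) versus
# 25/175 analysed control patients (8 missing).
print(or_with_imputed_missing(12, 180, 10, 25, 175, 8))
```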

Assessment of heterogeneity

If there are at least three studies that measure a primary outcome in a comparable way, we will inspect a forest plot to gain a sense of the heterogeneity of estimates. Predicted sources of heterogeneity include type of model, type of study design, base rate of the outcome (i.e. how much room there is for improvement), hospital or system characteristics (e.g. teaching versus community hospital), and risk of bias.
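For illustration, a forest plot of this kind could be drawn with standard plotting tools; the three studies and their effect estimates below are entirely hypothetical:

```python
import matplotlib.pyplot as plt

# Hypothetical effect estimates (mean difference in wait time, hours)
# and 95% CIs from three studies of the same ACS model variant.
studies = ["Study A", "Study B", "Study C"]
effects = [-1.5, -0.8, -2.1]
ci_low = [-2.4, -1.9, -3.3]
ci_high = [-0.6, 0.3, -0.9]

fig, ax = plt.subplots()
y = list(range(len(studies)))
ax.errorbar(effects, y,
            xerr=[[e - lo for e, lo in zip(effects, ci_low)],
                  [hi - e for e, hi in zip(effects, ci_high)]],
            fmt="s", color="black", capsize=3)
ax.axvline(0, linestyle="--", color="grey")  # line of no effect
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.set_xlabel("Mean difference in wait time (hours)")
plt.show()
```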

Assessment of reporting biases

We will attempt to mitigate the effects of publication bias through efforts to locate unpublished studies and evaluations. If there are at least 10 studies that report on a primary outcome, we will inspect a funnel plot for evidence of potential publication bias. Since non‐randomized studies typically do not have a registered protocol (and may not have a protocol at all), it will be difficult to assess selective reporting. However, we will note (a) whether any outcomes are mentioned in the methods section but not the results section, or vice versa; and (b) whether any accompanying information (e.g. the authors’ reply in a discussion) reveals that certain analyses were not reported.

Data synthesis

We will analyze each of the four (or more) ACS models separately in the first instance, as the outcomes of one model may not be generalizable to another. We will categorize studies in terms of the model they most closely reflect; this decision may depend both on the intervention itself and on the focus of the study (e.g. participants, comparator). For example, an intervention to create a new acute‐care surgery program that incorporates an existing trauma service might be categorized as “new role” if the study focuses on how the presence of a designated surgeon affects non‐trauma patients, but as “expanding a trauma service” if the study focuses on how trauma patients fare when their surgeons take on additional responsibilities.  We will analyze randomized studies, if any, separately from non‐randomized studies.

We will present the characteristics of included studies and a summary of findings in table format (along with a list of excluded studies). As this topic area does not lend itself to RCTs, we do not anticipate carrying out a meta‐analysis. Where more than one study reports on the same outcome, measured in the same way, for the same model, we will present the median and range of effect sizes as an alternative to “vote‐counting.” We will synthesize findings narratively, noting the overall pattern of findings (within and across the model types) and the quality of evidence (taking into account the risk‐of‐bias assessment). We will identify the intervention components and other factors (e.g. hospital characteristics) that appear to be consistently associated with positive or negative outcomes, while noting that such observations must be treated as hypothesis‐generating, not hypothesis‐confirming. In the event that we find few or no studies with eligible designs, the review will provide some discussion of the characteristics and findings of studies with ineligible designs. The conclusion will assess the extent to which there is adequate evidence for or against the acute care model(s), and identify areas requiring future research.
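As a small illustration of the median‐and‐range summary described above (the extracted effects and model labels below are hypothetical), such a tabulation might look like:

```python
import pandas as pd

# Hypothetical extracted effects: one row per study, labelled by ACS model
# variant and outcome, with effects expressed in comparable units.
effects = pd.DataFrame({
    "model":   ["new role", "new role", "new role", "trauma expansion"],
    "outcome": ["wait time (hours)"] * 4,
    "effect":  [-1.5, -0.8, -2.1, -0.4],
})

# Median and range of effect sizes per model/outcome, rather than vote-counting.
summary = effects.groupby(["model", "outcome"])["effect"].agg(["median", "min", "max", "count"])
print(summary)
```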