Introduction

Hospital-acquired infections (HAI) with highly resistant microorganisms are a substantial problem in the intensive care unit (ICU). Highly resistant organisms evolve and spread predominantly in the hospital, particularly the ICU, under intense selection pressure from antibiotic use [1], although some eventually diffuse into the community [2, 3]. Infection with highly resistant organisms is associated with increased ICU-related mortality [4]. Cost increases are large, as highly resistant microorganisms require systematic change to more expensive empiric therapy for all potentially infected patients as well as more intensive treatment for infected patients [5]. The incidence of infection and colonization by these organisms has risen steadily [6].

Transmission of a highly resistant organism within ICUs is often the focus of infection control efforts [7–10]. However, highly resistant organisms can also spread between hospitals via interhospital patient transfers. The most notable recent example was the iatrogenic spread of severe acute respiratory syndrome (SARS) in Toronto [11]. Several other examples have been documented [12–14]. Patients may act as vectors if they are colonized by highly resistant organisms even when not actively infected [15, 16]. In contrast to these cases of documented spread across hospitals, most theoretical work on the spread of highly resistant organisms has focused on spread within a single hospital or population [17–20], although some account for local community interactions [21, 22]. Interactions between multiple hospitals not only change transmission dynamics but also perversely weaken incentives for infection control within hospitals [23].

This study tested the hypothesis that the observed transfer patterns of critically ill patients could, in principle, distribute a highly resistant microorganism throughout the USA. We then evaluated the relative value of uniform infection control efforts versus selected targeting of hospitals for infection control on the basis of their position in the transfer network. We considered both a national perspective and a single-state perspective given the diverse decision-makers with stakes in ICU infections.

Methods

Study design and data

We performed a simulation study of interhospital ICU transfers as a vector for the spread of highly resistant microorganisms. We used observed nationwide transfer patterns from 2005 US Medicare data and compared alternative approaches to placing infection control resources. Our key outcome variable was the number of critical care beds exposed to the highly resistant microorganism.

In order to simulate spread over the actual patterns of patient transfer in the USA, we used the final action claims from the 2005 Medicare Provider Analysis and Review (MedPAR) file [24]. For the primary analyses, we examined all claims for patients from the 50 United States between September 2004 and September 2005. Detailed analyses of this transfer network have been published previously; we examined direct hospital-to-hospital transfers in which both hospital stays involved critical care use [25]. Additional details are in the electronic supplementary material (ESM). We examined transfers among the 3,306 hospitals that transferred patients to one another. Because highly resistant microorganisms can be spread by patients who are colonized or infected, we did not distinguish between colonization and infection. We examined only transfers of admitted patients and excluded patients discharged home between hospital stays.

For each hospital, the total number of critical care beds was extracted from the Medicare Healthcare Cost Report Information System [26]. For the few hospitals missing these data (<5% per year), bed counts were imputed on the basis of the nationwide ratio of Medicare critical care patients to critical care beds.
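As a simple illustration, this imputation can be read as scaling a hospital's Medicare critical care patient count by the nationwide patients-per-bed ratio. The following Python sketch uses hypothetical numbers and variable names; it is not the authors' actual procedure.

```python
# Hypothetical sketch of the bed-count imputation described above; the exact
# formula and all numbers here are illustrative assumptions.

def impute_beds(hospital_patients, national_patients, national_beds):
    """Impute a hospital's critical care beds from its Medicare critical care
    patient count, using the nationwide patients-per-bed ratio."""
    patients_per_bed = national_patients / national_beds
    return round(hospital_patients / patients_per_bed)

# Example with made-up totals: ~40 Medicare critical care patients per bed
# nationally, so a hospital with 400 such patients is imputed to have 10 beds.
print(impute_beds(400, 2_000_000, 50_000))  # -> 10
```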

This study was approved by the University of Michigan Institutional Review Board as HUM00023637.

Simulating infection spread and control

Our general approach to the simulation was as follows, with greater detail in the ESM. First, we selected a hospital at random as the source of spread, with probability proportional to its number of beds. Patients were then transferred from each hospital in proportion to the observed transfer patterns. When a colonized patient arrived at a receiving hospital, that hospital acquired the highly resistant microorganism with a probability that decreased with the receiving hospital's investment in infection control. We simulated two levels of infectivity: in the maximal infectivity condition, the probability of transmission at any given transfer was 1 in the absence of infection control; in the moderate infectivity condition, it was 0.1. We followed the spread of highly resistant microorganisms across hospitals over time under different infection control strategies.
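For concreteness, the first step (choosing the index hospital with probability proportional to its bed count) could look like the following Python sketch; the published models were written in Perl, and the hospital names and bed counts here are hypothetical.

```python
import random

# Hypothetical mapping from hospital ID to number of critical care beds.
beds = {"Hospital A": 40, "Hospital B": 12, "Hospital C": 8}

def pick_source_hospital(beds, rng=random):
    """Choose the source hospital for one simulation run, with probability
    proportional to its number of critical care beds."""
    hospitals = list(beds)
    weights = [beds[h] for h in hospitals]
    return rng.choices(hospitals, weights=weights, k=1)[0]

source = pick_source_hospital(beds)  # e.g., "Hospital A" with probability 40/60
```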

Our primary outcome variable was the total number of critical care beds exposed to highly resistant microorganisms under different infection control strategies. For the maximal infectivity condition we examined 1 year of follow-up; for the moderate infectivity condition we examined 5 years. Because infection control resources are costly and therefore limited, we compared four approaches to allocating scarce infection control resources (Table 1; an illustrative sketch of a greedy allocation follows the table). We used a t test to compare the mean number of exposed critical care beds over all simulations across the infection control strategies.

Table 1 Approaches to allocating infection control resources
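Table 1 is not reproduced here, so the following Python sketch shows only one plausible reading of a greedy allocation: units are assigned one at a time, each to the hospital whose additional unit most reduces the simulated mean number of exposed beds. The function mean_exposed_beds() is a hypothetical stand-in for the simulation outcome, and the sketch is illustrative rather than a description of the authors' implementation.

```python
from collections import defaultdict

def greedy_allocate(hospitals, total_units, mean_exposed_beds):
    """Assign infection control units one at a time, each to the hospital
    whose extra unit most reduces the simulated mean number of exposed beds."""
    allocation = defaultdict(int)
    for _ in range(total_units):
        best_hospital, best_outcome = None, float("inf")
        for h in hospitals:
            allocation[h] += 1                       # tentatively add one unit
            outcome = mean_exposed_beds(allocation)  # hypothetical simulation call
            if outcome < best_outcome:
                best_hospital, best_outcome = h, outcome
            allocation[h] -= 1                       # undo the tentative unit
        allocation[best_hospital] += 1               # commit the best placement
    return {h: units for h, units in allocation.items() if units > 0}
```

In a sketch like this, each call to mean_exposed_beds() would itself require many simulation runs, which is why a greedy allocation of this kind is computationally expensive.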

We modeled the impact of infection control on infection spread by allocating arbitrary “units” of infection control, each leading to a 25% reduction in hospital-to-hospital transmission probability. If a hospital received more than 1 unit of infection control, the reductions were multiplicative: a hospital with 1 unit would have a 75% chance of becoming colonized from any given transfer, and a hospital with 2 units would have a (75%) × (75%) = 56.25% chance. This approach captures the diminishing marginal utility of infection control. These simplifying assumptions were based on a review of a wide range of published studies of existing infection control techniques [27–33]. In the national analysis we allocated 500 total units of infection control.
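Concretely, a hospital's units translate into a multiplier of 0.75 raised to the number of units, applied to its baseline acquisition probability. A minimal Python sketch of this multiplier, with the 18-unit case reported later in the Results included for reference:

```python
def transmission_multiplier(units: int) -> float:
    """Factor applied to the baseline acquisition probability for a hospital
    holding `units` units of infection control (each unit cuts transmission
    by 25%, applied multiplicatively)."""
    return 0.75 ** units

# 1 unit -> 0.75, 2 units -> 0.5625, 18 units -> ~0.0056 (about a 177-fold reduction)
for units in (1, 2, 18):
    print(units, transmission_multiplier(units))
```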

Formally, the daily transmission probability from hospital i to hospital j in the maximal infectivity condition was

$$p_{ij} = \frac{t_{ij}}{365} \times 0.75^{R_j}$$

where $t_{ij}$ was the total number of transfers in a year from hospital i to hospital j, $R_j$ was the number of infection control units allocated to hospital j, and 365 was the number of days of observation in the Medicare data. Once a hospital became “infected,” we modeled its outgoing transfers as HAI carriers beginning the next day. For the moderate infectivity condition we multiplied the daily transmission probability by 0.1, introduced a 7-day delay before an infected hospital’s outgoing patients became infective, and replicated all analyses.

All models were coded in Perl. We simulated the independent spread from each hospital at least 10 times, providing over 33,000 simulation runs for each test condition.
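To make these mechanics concrete, the following Python sketch simulates a single run; the published models were coded in Perl, and the toy transfer matrix, allocation, and parameter values here are hypothetical placeholders rather than the authors' implementation.

```python
import random

def simulate_run(transfers, allocation, source, days=365,
                 infectivity=1.0, delay_days=1, rng=random):
    """Simulate spread from `source` over `days` days.

    transfers[(i, j)] -- annual number of critical care transfers from i to j
    allocation[j]     -- infection control units held by hospital j
    infectivity       -- 1.0 for the maximal condition, 0.1 for the moderate one
    delay_days        -- 1 (maximal) or 7 (moderate) days before an exposed
                         hospital's outgoing transfers become infective
    Returns the set of exposed hospitals.
    """
    exposed_on = {source: 0}  # hospital -> day on which it became exposed
    for day in range(1, days + 1):
        newly_exposed = {}
        for (i, j), t_ij in transfers.items():
            if i in exposed_on and day - exposed_on[i] >= delay_days and j not in exposed_on:
                # Daily transmission probability p_ij from the formula above.
                p_ij = infectivity * (t_ij / 365) * 0.75 ** allocation.get(j, 0)
                if rng.random() < p_ij:
                    newly_exposed[j] = day
        exposed_on.update(newly_exposed)
    return set(exposed_on)

# Hypothetical toy network: "A" sends 30 critical care transfers per year to "B", etc.
transfers = {("A", "B"): 30, ("B", "C"): 15, ("A", "C"): 5}
allocation = {"B": 2}  # hospital B holds 2 infection control units
print(simulate_run(transfers, allocation, source="A"))
```

Summing the critical care beds of the exposed hospitals returned by such a run gives the primary outcome for that run.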

Sensitivity analyses

We performed several sensitivity analyses designed to assess the robustness of the results to our assumptions and data. We replicated our analysis using 1,000 infection control units instead of 500. We also replicated our primary analyses in other years of Medicare data, 1998–2005. We replicated our analyses using all-payer data from the state of Pennsylvania, considering transfers only within Pennsylvania. Finally, we considered the policy-relevant situation in which infection control resources are allocated on the basis of the network observed using 1998 data, but transmission occurs at a later point, in 2005—when transfer patterns have changed, but infection control resources have not been reallocated.

Results

We analyzed all transfers of critically ill patients among 3,306 US hospitals in the 2005 Medicare data. The hospitals reported a median of 13 critical care beds in 2005, with an interquartile range from 7 to 26; 7 hospitals reported over 150 critical care beds. There were 64,760 total critical care beds in the hospitals that transferred patients to one another.

The network was deeply interconnected: 99.1% of hospitals sent patients out to other hospitals and therefore could initiate the spread of highly resistant microorganisms; 27.4% of hospitals only transferred patients out, and so could initiate the spread of highly resistant microorganisms but could not receive it. Because of the high interconnectedness of the network, approximately 65% of the hospitals could receive highly resistant microorganisms from any starting hospital.
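Connectivity figures of this kind can be computed directly from the directed transfer graph. The following sketch uses the networkx library on a hypothetical toy graph purely to illustrate the calculations; it is not the analysis code used in the study.

```python
import networkx as nx

# Hypothetical directed transfer graph: an edge i -> j means at least one
# critical care transfer from hospital i to hospital j during the year.
G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "A"), ("D", "A")])

n = G.number_of_nodes()
senders = sum(1 for h in G if G.out_degree(h) > 0)   # could initiate spread
send_only = sum(1 for h in G if G.out_degree(h) > 0 and G.in_degree(h) == 0)

# Share of hospitals reachable (i.e., able to receive spread) from one start.
reachable_from_A = len(nx.descendants(G, "A")) / n

print(senders / n, send_only / n, reachable_from_A)
```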

To characterize the potential rate of spread, we examined the time it would take for a highly resistant organism to spread between two randomly selected hospitals. Under maximal infectivity, it would take a median of just over 3 years for an infection to spread between any two hospitals in the network using only critical care transfers. (That is, spread would occur between half of randomly selected hospital pairs in less than 3 years, and between the other half in more than 3 years.) Under moderate infectivity, an infection could spread from the most central hospital to any other hospital in the country within a median of approximately 21.5 years.

Different allocations of infection control resources (see Table 1) led to marked differences in the extent and rate of spread (Figs. 1, 2). Under a random allocation of resources, a mean of 3,475 critical care beds were exposed to the highly resistant microorganisms at the end of 1 year (SD 3,319) under maximal infectivity. If resources were allocated using the degree-centrality approach, 2,099 beds (SD 2,048) were exposed. Allocating resources using the betweenness-centrality approach yielded 2,023 exposed beds (SD 2,056). The greedy approach limited spread to 944 beds (SD 836) within 1 year (all differences P < 0.001). Further, the greedy algorithm resulted in more robust infection control regimes (Fig. 1); the most widespread diffusion of the highly resistant microorganisms under the greedy algorithm was much lower than the worst-case scenarios for the other approaches. Very similar patterns were obtained after 5 years of spread under the moderate infectivity condition. Any network-aware algorithm for resource allocation was more effective than random allocation, and the greedy algorithm significantly outperformed degree- and betweenness-based allocations.

Fig. 1

Differences in spread of highly resistant microorganisms under different allocation strategies under maximal infectivity. Each box runs from the 25th percentile to the 75th percentile, with a bold line at the median. The whiskers show 1.5 times the interquartile range or the observed minimum, and outliers beyond that range are plotted as dots [46]. The means are examined more closely in Fig. 2

Fig. 2

Temporal differences in spread of highly resistant microorganisms under different allocation strategies under maximal infectivity. The x-axis is days since the development of the resistant organism at the first hospital

The greedy algorithm identified a small number of hospitals as key to preventing wide dissemination of highly resistant microorganisms (Fig. 3) under both the maximal and moderate infectivity conditions. For example, the greedy algorithm allocated all 500 resources to only 96 hospitals under the maximal infectivity condition; the geographic distribution of these hospitals is shown in Fig. 4. Eighteen resources were allocated to the most central hospital, equivalent to reducing the probability of transmission to that hospital 177-fold.

Fig. 3

Variations across hospitals in infection control resources under greedy allocation of 500 resources

Fig. 4

Geographic distribution of infection control resources under greedy algorithm. Hospitals are colored relative to the number of infection control resources they were allocated—gray for none, then a spectrum of blue (few) to red (most). Hospital marker size is proportional to number of beds

Targeted allocation of infection control resources was much more efficient than universally mandating that all hospitals engage in the same infection control strategy. To achieve the same control as 500 units targeted using the greedy algorithm under maximal infectivity, a uniform distribution across all hospitals would have required more than 8,000 units.

Sensitivity analyses

Our results were robust to sensitivity analyses in which we varied the source of data (all-payer vs. Medicare), the scale of the simulations (one state vs. whole USA), the time lag between when the control resources were placed versus when transmission might occur, the functional form of diminishing marginal utility of infection control, and the total number of infection control resources provided. Results are in the ESM.

Discussion

The spread of highly resistant microorganisms capable of causing HAI is emerging as a key problem for critical care practitioners. In this study we demonstrate that interhospital patient transfers could play an important role in the nationwide spread of highly resistant microorganisms from ICU to ICU. Infection control efforts to prevent such spread can be made much more efficient by selectively targeting the hospitals most important for transmission. Indeed, concentrating intensive infection control resources at a small number of hospitals can be 16 times more effective at stopping interhospital spread than distributing the same resources uniformly across all hospitals. Our findings were similar in a national sample of US Medicare patients and a state-wide all-payer database, and were robust to varying assumptions about the rate of spread of the organisms, the quantity of infection control resources, and changes in the transfer network between 1998 and 2005.

Our simulation data suggest that the increasing prevalence of highly resistant organisms across hospitals may be caused not only by selection pressures within hospitals but also by interhospital spread of highly resistant HAI. The data suggest that detected multihospital outbreaks of Staphylococcus [13, 14], Klebsiella [12], Acinetobacter [34, 35], and coronavirus [11] may not be isolated incidents. Such spread may occur not only via interhospital transfer of critical care patients, as we have studied here, but also via patients from nursing homes or patients with brief intervals between hospital admissions [3]. Our simulation data further suggest that the time course of spread via transfers might be of the same order of magnitude as the widespread development of resistance due to antibiotic overuse. Spread within smaller regions or areas with more intensive transfer links would be expected to be faster, as was elegantly shown in an independent study of the Netherlands [36].

When a central hospital does a particularly effective job of limiting spread of a highly resistant microorganism, this benefits not only the central hospital but also all the hospitals that receive transfers from that central hospital [23]. Indeed, these benefits extend to several degrees of separation—infection control at the central hospital benefits hospitals that receive transfers from those hospitals that receive transfers from the central hospital, and so on. These network interdependencies might be termed “network transmission externalities.” Despite such interdependencies, the existing structure of hospital epidemiology primarily focuses on infection control within a given hospital system. This organizational focus neglects the potential role of interhospital transfers and may be under-resourced to alter the nationwide spread of highly resistant organisms.

This work has practical implications for how infection control resources are allocated. In particular, it strongly argues for regional coordination of infection control resources. There is no reason to think that outbreaks of highly resistant organisms are contained within a single institution. For many ICUs, their own antibiotic stewardship programs simply cannot eliminate the risk of highly resistant organisms unless the program is coordinated with those of sending hospitals, or the risk of spread from transferred patients is held to a low level [23]. Further, many hospitals may be as likely to acquire highly resistant organisms from transfers as to evolve highly resistant infections from their own antibiotic use. The optimal balance of resources between preventing endogenous development of highly resistant organisms and preventing their spread needs to be weighed carefully for each hospital; both sources require attention, not an exclusive focus on one or the other.

In the economics literature, the presence of externalities makes a prima facie case for regulatory intervention unless the involved organizations are able to coordinate a response themselves [37]. The precise nature of any intervention needs to be carefully considered—and there is less theoretical consensus—but what is clear is that independent uncoordinated action is unlikely to be optimal. There is already anecdotal evidence of a graded response. Many large academic medical centers employ teams of hospital epidemiologists substantially larger than those of community hospitals. Our results also suggest, but certainly do not prove, that policies of intensive early surveillance of all transferred patients might be particularly appropriate [38]. At particularly central hospitals, presumptive isolation of all transfers might be considered. The most cost-effective way to expand infection control at any given hospital is likely to vary—our results argue for some coordination between hospitals in deciding where to expand.

Our results also have theoretical implications for how we think about diffusion within a network [39]. Prior approaches in human contact networks have focused on limiting infection spread by removing individual nodes from the network entirely via targeted immunization. These approaches have targeted individuals on the basis of degree [40] and betweenness [41]. In contrast, our study shows that a greedy allocation based on transfer rates further improves outcomes over degree- and betweenness-based allocation, and that where it is not possible to render a node completely immune, it is of benefit to allocate resources unevenly among targeted nodes.

Our results have several limitations. From a practical perspective, we have simulated the potential spread of a highly resistant HAI across a real network. Although we have marshaled anecdotal evidence that this sort of transmission has occurred between at least several hospitals, we have not demonstrated that it has actually occurred on a nationwide scale; our results demonstrate the feasibility of such spread and argue for surveillance for that possibility. Second, we have studied the USA; parallel studies in other regions would be scientifically very productive [36]. Third, although we have examined several credible approaches to allocating infection control resources, we have not proven that the greedy algorithm is theoretically optimal, only that it is superior to the other algorithms tested [42, 43]. Fourth, this exploration has focused on allocation when coordination is possible; it is not clear which approach is most robust if other hospitals can be expected to deviate from the allocation plan. Fifth, a full model, of substantially greater complexity, might account for transmission via brief community stays and non-ICU transfers; others have begun this important work at the scale of one county, rather than examining nationwide interdependence [3]. Finally, we have made simplifying assumptions about the nature of infection control allocation in the setting of scarce health-care resources and about the ability to add additional resources at similar marginal cost; further, the degree of transmission reduction suggested by the model at the most central hospitals may not be achievable in the real world. Translating these necessary model assumptions into direct policy recommendations must be done with care.

Highly resistant HAIs are likely to grow in importance over the coming years. New and virulent strains of bacteria and viruses, including the recent H1N1 influenza pandemic, make this problem particularly important [44]. We demonstrate that interhospital transfers of critically ill patients might form a vector for national-scale transmission of highly resistant microorganisms. Furthermore, we suggest that coordination between hospitals in allocating infection control resources could substantially decrease interhospital transmission at a much lower total cost. The frequent transfer of patients between our ICUs results in substantial nationwide interdependence, and acknowledging and managing that interdependence may be important to public health and security.