Statistical Design of REACT (Rapid Early Action for Coronary Treatment), a Multisite Community Trial with Continual Data Collection

https://doi.org/10.1016/S0197-2456(98)00014-2

Abstract

Unusual problems in statistical design were faced by Rapid Early Action for Coronary Treatment (REACT), a multisite trial testing a community intervention to reduce the delay between onset of symptoms of acute myocardial infarction (MI) and patients’ arrival at a hospital emergency department. In 20 pair-matched U.S. communities, hospital staff members recorded delay time throughout a 4-month baseline period and the subsequent 18-month intervention period, during which one randomly selected community of each pair received a campaign of public and professional education. To exploit the continual nature of its data-collection protocol, REACT estimated the trend of delay time separately in each community by linear regression, adjusting for age, sex, and history of MI, and compared the ten adjusted slopes from intervention communities with those from control communities by a paired t-test. Power calculations based on the analytical model showed that with K = 600–800 cases per community, REACT would have 80% power to demonstrate a differential reduction of 30 min in mean delay time between intervention and control communities, as well as effects on a variety of secondary outcomes. Sensitivity analysis confirmed that the number of communities was optimal within constraints of funding and that the detectable effect depended weakly on the effectiveness of matching but strongly on K, helping the investigators set operational priorities. The methodologic strategy developed for REACT should prove useful in the design of similar trials in the future.

Introduction

Rapid Early Action for Coronary Treatment (REACT), a multisite community trial sponsored by the National Heart, Lung, and Blood Institute (NHLBI), was designed to test an intervention aimed at reducing the delay between onset of symptoms of acute myocardial infarction (MI) and patients’ arrival at a hospital emergency department, on the premise that prompt treatment of acute MI would enhance the effectiveness of thrombolytic therapy 1, 2, 3. REACT was conducted in 20 communities in the United States, pair-matched by region, size, and demographics. REACT randomly assigned one community from each pair to receive the intervention, which consisted of a multifaceted campaign directed at health professionals, patients, and the public, encouraging recognition of symptoms, quick response, and use of 911 emergency services. The study team trained hospital staff members to record delay time for every patient presenting with chest pain during the 4-month baseline period (December 1995 through March 1996) and the 18-month intervention period (April 1996 through September 1997). REACT staff collected additional data from the medical chart, including demographics and secondary endpoints. At the end of the trial, REACT examined the data for differences in delay-time trend between the intervention community and the control community of each pair. The background, behavioral-science rationale, intervention design, and field methods of REACT are detailed elsewhere 1, 2.
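The per-pair analysis described above can be sketched in code. The following is an illustrative reconstruction, not REACT code: the covariate effects, noise level, and trend values are invented, and the adjustment is plain least squares of log10 delay time on calendar time, age, sex, and MI history, as the abstract describes, followed by a paired t-test on the ten slope differences.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def adjusted_slope(time, log_delay, covariates):
    """OLS of log10 delay time on calendar time plus covariates;
    returns the coefficient on time (the community's delay trend)."""
    X = np.column_stack([np.ones_like(time), time, covariates])
    beta, *_ = np.linalg.lstsq(X, log_delay, rcond=None)
    return beta[1]

# Simulated data for 10 matched pairs (all numbers hypothetical)
n_pairs, cases = 10, 700
slopes_int, slopes_ctl = [], []
for _ in range(n_pairs):
    for slopes, trend in ((slopes_int, -0.010), (slopes_ctl, 0.0)):
        t = rng.uniform(0, 18, cases)  # months into intervention period
        cov = np.column_stack([
            rng.uniform(40, 80, cases),        # age
            rng.integers(0, 2, (cases, 2)),    # sex, history of MI
        ])
        y = 0.4 + trend * t + 0.001 * cov[:, 0] + rng.normal(0, 0.3, cases)
        slopes.append(adjusted_slope(t, y, cov))

# Compare the ten intervention slopes with the ten control slopes
t_stat, p = stats.ttest_rel(slopes_int, slopes_ctl)
```

The paired t-test respects the matched design: each pair contributes one slope difference, so community-level variation cancels within pairs rather than inflating the error term.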

REACT resembled other community trials in that it assigned a “treatment” to whole communities such as cities, suburbs, or counties, whereas the primary endpoint was an attribute of individual patients 4, 5, 6, 7, 8. This hierarchical data structure imposed some well-known special requirements for design and analysis 9, 10, 11, 12, 13, 14, 15. What made REACT unique among community trials was that its endpoint events occurred continually throughout the study, rather than at discrete evaluation times before and after intervention, so that REACT had more the structure of a surveillance study than that of a traditional cohort-based clinical trial. The availability of time-specific observations allowed REACT to account for time more precisely in its analytic model than is possible in a typical community trial.

In this paper we describe the statistical methods that we developed to take advantage of the unusual design of REACT and to address its special challenges. Under the heading “Methods” we present a comprehensive analytic model for the study, formulated so as to provide a consistent basis for parameter estimation, hypothesis testing, and calculations of detectable effect. Under “Results” we examine the sensitivity of the detectable effect to various design parameters and describe how, at a critical juncture of the study, the model served to adjust eligibility criteria and ensure adequate case accrual. Under “Discussion” we compare REACT with other community trials and justify its methodologic choices.

Analytical Model

The primary endpoint in REACT was delay time, denoted T and defined as the time elapsed between the onset of acute symptoms and the patient’s arrival at the hospital emergency department. Preliminary data and previous studies indicated that the distribution of T would be right-skewed [16], making y = log10 T approximately Gaussian. Assuming y ∼ N(μ, σ²), T would have median M = 10^μ, with approximately 95% of delay times falling in the multiplicative range [M/R, M × R], where R = 10^(1.96σ).
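The relationship among μ, σ, the median M, and the multiplicative 95% range R can be verified numerically. The σ value below is an assumption chosen for illustration; M = 2.5 hours matches the pilot value quoted later in the paper.

```python
import numpy as np

# Hypothetical log10-scale parameters: median delay M = 2.5 h
mu, sigma = np.log10(2.5), 0.5

M = 10 ** mu                    # median delay time, hours
R = 10 ** (1.96 * sigma)        # half-width of the 95% multiplicative range
low, high = M / R, M * R

# Simulate delay times T = 10^y with y ~ N(mu, sigma^2)
T = 10 ** np.random.default_rng(1).normal(mu, sigma, 100_000)
coverage = np.mean((T >= low) & (T <= high))   # should be close to 0.95
```

Because the model is multiplicative, the interval is symmetric on the log scale but skewed on the original scale, matching the right skew of observed delay times.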

The simplest

Design Curves

In Figure 2 (upper panel) we display the primary design curve for REACT, tracing the detectable reduction in delay time (ΔT, Eq. 7) as a function of cases per community (Kj). We chose a conservative set of community variance parameters for this important curve: zero pair correlation (ρ = 0) and moderate slope variance (Rβ = 1.2). We based the delay-time distribution parameters on pilot data and published literature (M = 2.5 hours, R
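A simplified version of such a design curve can be computed from the paired t-test power formula. The variance expression below is a schematic stand-in for the paper's full model (Eq. 7 is not reproduced in this excerpt): it takes the regression-slope variance to be σ²/(K · Var(t)) per community, with an optional extra between-community slope component; all parameter values are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def detectable_delta(K, sigma_y=0.5, var_t=27.0, n_pairs=10,
                     alpha=0.05, power=0.80, slope_var_extra=0.0):
    """Minimum detectable difference in mean delay-time slope
    (log10 units per month) for a paired t-test on n_pairs pairs,
    with K cases per community."""
    df = n_pairs - 1
    # variance of one community's estimated slope (OLS formula, schematic)
    v_slope = sigma_y ** 2 / (K * var_t) + slope_var_extra
    sd_diff = np.sqrt(2 * v_slope)  # within-pair slope difference
    t_a = stats.t.ppf(1 - alpha / 2, df)
    t_b = stats.t.ppf(power, df)
    return (t_a + t_b) * sd_diff / np.sqrt(n_pairs)

# Trace the curve: the detectable effect shrinks as K grows
for K in (200, 400, 600, 800):
    print(K, round(detectable_delta(K), 5))
```

The shape of the curve illustrates the sensitivity result in the abstract: the detectable effect depends strongly on K (through the 1/√K factor in the slope standard error) but only weakly on pair matching, which enters through the between-community variance components.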

Discussion

In this section we discuss and justify our methodologic choices in light of the similarities and differences between REACT and other trials.

Acknowledgements

In addition to listed authors, the following REACT principal and co-principal investigators and NHLBI project officers contributed to the work reported here: James Raczynski and Carol Cornell, University of Alabama at Birmingham; Robert Goldberg and Jane Zapka, University of Massachusetts; Russell Luepker and John Finnegan, University of Minnesota; Milton Nichaman and Alfred McAlister, University of Texas; Mickey Eisenberg, Jerris Hedges, and Hendrika Meischke, University of Washington, Oregon

References

  • T.A. Pearson et al.

Cardiopulmonary community prevention trials: directions for public health practice, policy, and research

    Ann Epidemiol

    (1997)
  • M.H. Gail et al.

    Aspects of statistical design for the Community Intervention Trial for Smoking Cessation (COMMIT)

    Controlled Clin Trials

    (1993)
  • D.M. Murray

    Design and Analysis of Group-Randomized Trials

    (1998)