Estimating population average treatment effects from experiments with noncompliance

Randomized control trials (RCTs) are the gold standard for estimating causal effects, but often use samples that are non-representative of the actual population of interest. We propose a reweighting method for estimating population average treatment effects in settings with noncompliance. Simulations show the proposed compliance-adjusted population estimator outperforms its unadjusted counterpart when compliance is relatively low and can be predicted by observed covariates. We apply the method to evaluate the effect of Medicaid coverage on health care use for a target population of adults who may benefit from expansions to the Medicaid program. We draw RCT data from the Oregon Health Insurance Experiment, where less than one-third of those randomly selected to receive Medicaid benefits actually enrolled.


Introduction
Randomized control trials (RCTs) are the gold standard for estimating the causal effect of a treatment. An RCT may give unbiased estimates of sample average treatment effects, but external validity is an issue when RCT participants are unrepresentative of the actual population of interest. For example, participants in an RCT in which individuals volunteer to sign up for health insurance may be in poorer health at baseline than the overall population.
External validity is particularly relevant to policymakers who require information on how the treatment effect would generalize to the broader population.
A new research frontier in causal inference focuses on developing methods for extrapolating RCT results to a population [1][2][3]. Existing approaches to this problem are designed for settings with full compliance with treatment; however, noncompliance is a prevalent issue in RCTs. Noncompliance occurs when individuals who are assigned to the treatment group do not accept the treatment. For individuals assigned to control, we are unable to observe who would have complied had they been assigned treatment. Noncompliance biases the intention-to-treat (ITT) estimate of the effect of treatment assignment toward zero.
We propose a reweighting method for estimating complier-average causal effects for the target population from RCT data with noncompliance, and refer to this estimator as the Population Average Treatment Effect on Treated Compliers (PATT-C). We model compliance in the RCT in order to predict the likely compliers in the RCT control group. Assuming that the response surface is the same for compliers in the RCT and population members who received treatment, we then predict the response surface for all RCT compliers and use the predicted values from the response surface model to estimate the potential outcomes of population members who received treatment, given their covariates.
Our approach for estimating PATT-C differs from previous reweighting methods because we only need to estimate the potential outcomes for RCT compliers and we cannot observe who in the control group would have complied had they been assigned treatment. In Stuart et al. [2], for example, a propensity score model is used to predict participation in the RCT, given pretreatment covariates common to both the RCT and population data. Individuals in the RCT and population are then weighted according to the inverse of the estimated propensity score. Hartman et al. [3] propose a method of reweighting the responses of individuals in an RCT according to the covariate distribution of the population. Reweighting methods typically leverage exchangeability of potential outcomes between the covariate-adjusted treated and controls in the RCT. In our approach, the potential outcomes between the complier treated and complier controls are not exchangeable by design, since we need to assume we know the compliance model.
When estimating the average causal effect for compliers from an RCT, researchers typically scale the estimated ITT effect by the compliance rate, assuming that there is only single crossover from treatment to control. 1 When extrapolating RCT results to a population, one might simply reweight the ITT effect according to the covariate distribution of the population and then divide by the proportion of treated compliers in the population in order to yield a population average effect of treatment on treated compliers. However, we do not observe the population compliance rate, and it is likely to differ across subgroups based on pretreatment covariates. By explicitly modeling compliance, our approach allows researchers to decompose population estimates by covariate group, which is useful for policymakers in evaluating the efficacy of policy interventions for subgroups of interest in a population.
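To illustrate the standard compliance adjustment described above, here is a minimal sketch in Python; the helper name and numbers are hypothetical, not from the paper:

```python
# Hedged sketch: the standard Wald/IV-style compliance adjustment that
# scales an ITT estimate by the compliance rate, valid only under
# one-sided (single) crossover from treatment to control.
def cace_from_itt(itt_effect: float, compliance_rate: float) -> float:
    """Scale the ITT effect by the share of treated compliers."""
    if not 0.0 < compliance_rate <= 1.0:
        raise ValueError("compliance rate must be in (0, 1]")
    return itt_effect / compliance_rate

# Hypothetical example: a small ITT effect of 0.02 with 30% compliance
# implies a complier-average effect roughly three times larger.
print(cace_from_itt(0.02, 0.30))
```

As the text notes, this simple scaling does not recover a population effect when the population compliance rate is unobserved and varies across covariate subgroups.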
We apply the PATT-C estimator to measure the effect of Medicaid coverage on health care use for a target population of adults who may benefit from government-backed expansions to the Medicaid program. We are particularly interested in measuring the effect of Medicaid on emergency room (ER) use because it is the main delivery system through which the uninsured receive health care, and the uninsured could potentially receive higher-quality health care through primary care visits. An important policy question is whether Medicaid expansions will decrease ER utilization and increase primary care visits by the previously uninsured. We draw RCT data from a large-scale health insurance experiment where less than one-third of those randomly selected to receive Medicaid benefits actually enrolled.

1 Alternative methods for compliance-adjusting sample estimates include estimating sharp bounds on the ITT effect in the presence of noncompliance [4,5]; adjustment for treatment noncompliance using principal stratification [6,7]; and maximum-likelihood and Bayesian inferential methods [8].
The paper proceeds as follows: Section 2 presents the proposed estimator and the necessary assumptions for its identifiability; Section 3 describes the estimation procedure; Section 4 reports the estimator's performance in simulations; Section 5 uses the estimator to identify the effect of extending Medicaid coverage to the low-income adult population in the U.S.; Section 6 discusses the results and offers direction for future research.

Estimator
We are interested in using the results of an RCT to estimate complier-average causal effects for a target population. Compliance with treatment in the population is not assigned at random. It may depend on unobserved variables, thus confounding the effect of treatment received on the outcome of interest. RCTs are needed to isolate the effect of treatment received.
Ideally, we would take the results of an RCT and reweight the sample such that the reweighted covariates match those in the population. In practice, one rarely knows the true covariate distribution in the target population. Instead, we consider data from an observational study in which participants are representative of the target population. The proposed estimator combines RCT and observational data to overcome these issues.

Assumptions
Let Y isd be the potential outcome for individual i in group s and treatment received d. Let S i ∈ {0, 1} denote the sample assignment, where s = 0 is the population and s = 1 is the RCT. Let T i ∈ {0, 1} indicate treatment assignment and D i ∈ {0, 1} indicate whether treatment was actually received. Treatment is assigned at random in the RCT, so we observe both D i and T i when S i = 1. For compliers in the RCT, D i = T i .
The absence of T i in the subscript of the potential outcomes notation implicitly assumes the exclusion restriction for noncompliers, which precludes treatment assignment from having an effect on the potential outcomes of noncompliers in the RCT [9]. Also implicit in our notation is the assumption of no interference between units, which prevents the potential outcomes for any unit from varying with treatments assigned to other units [10].
Let W i be individual i's observable pretreatment covariates that are related to the sample selection mechanism for membership in the RCT, treatment assignment in the population, and complier status. Let C i be an indicator for individual i's compliance with treatment, which is only observable for individuals in the RCT treatment group. In the population, we suppose that treatment is made available to individuals based on their covariates W i . 2 Individuals with T i = 0 do not receive treatment, while those with T i = 1 may decide whether or not to accept treatment. For individuals in the population, we only observe D i , not T i .

Assumption 1. Consistency under parallel studies: Assumption (1) requires that each individual i has the same response to treatment received, whether i is in the RCT or not. Compliance status C i is not a factor in this assumption because we assume that compliance is conditionally independent of sample and treatment assignment for all individuals with covariates W i .

Assumption 2. Conditional independence of compliance and sample and treatment assignment: this assumption is useful when predicting the probability of compliance as a function of covariates W i .

2 We use the same W i across all identifying assumptions, which implicitly assumes that the observable covariates that determine sample selection also determine population treatment assignment and complier status.
Together, Assumptions (1) and (2) ensure that potential outcomes do not differ based on sample assignment or receipt of treatment.
Assumption (3) ensures the potential outcomes for treatment are independent of sample assignment for individuals with the same covariates W i and treatment assignment.
Assumption 3. Strong ignorability of sample assignment for treated: We make a similar assumption for the potential outcomes under control.
Assumption 4. Strong ignorability of sample assignment for controls: Note that Assumptions (3) and (4) imply strong ignorability of sample assignment for treated and control noncompliers, since Assumption (2) states that compliance is also independent of sample and treatment assignment, conditional on W i . However, we are interested only in modeling the response surfaces for compliers.
Restrictive exclusion criteria in RCTs can result in a sample covariate distribution that differs substantially from the population covariate distribution, thereby reducing the external validity of RCTs [11]. High rates of exclusion pose a threat to Assumptions (3) and (4) if exclusion increases the likelihood that there are unobserved differences between the RCT and target population that are correlated with potential outcomes. For example, the RCT in our application required enrolled participants to recertify their eligibility status every six months during the study period. The exclusion of participants who failed to recertify because their household income exceeded a given cutoff threatens strong ignorability if the factors that contributed to the failure to recertify are correlated with unobservables that are also correlated with potential outcomes. While we cannot directly test the strong ignorability assumptions, bias arising from violations of these assumptions would cause the placebo tests described in Section 5.4 to fail.
We include an additional assumption to identify the average causal effect for compliers, as in Angrist et al. [12].
Assumption 5. One-sided noncompliance: Assumption (5) ensures that noncompliance is one-sided; i.e., individuals assigned to control are not allowed to receive treatment. It explicitly rules out the existence of individuals who always receive treatment regardless of assignment (i.e., always-takers) and those who receive the opposite of their treatment assignment (i.e., defiers).

Figure 1 shows the assumptions needed to identify PATT-C in the form of a causal diagram [13]. The missing arrow between T i (or S i ) and C i signifies that treatment (or sample) assignment may only depend on compliance status through covariates W i , by Assumption (2). Likewise, the missing arrow between Y isd and T i (or S i ) signifies that potential outcomes may only depend on treatment (or sample) assignment through covariates, by Assumptions (3) and (4). Confounding arcs represent potential back-door paths that contain only unobserved variables. The existence of back-door paths from D i to Y isd through S i and C i implies that the average causal effect of D i on Y isd cannot be identified by just conditioning on W i ; that is, we cannot ignore the role of S i and C i . From the internal validity standpoint, the role of W i is critical: in the presence of unobserved confounders, there is a back-door pathway from T i back to W i and into Y isd .

PATT-C
PATT-C is the average causal effect of taking up treatment, estimated on individuals who received treatment in the population. This interpretation follows directly from Assumptions (1) and (2), which ensure that potential outcomes do not differ based on sample assignment or receipt of treatment. Theorem (1) relates the treatment effect in the RCT to the treatment effect in the population (proof given in Appendix A),
where E 01 [E(· | . . . , W i )] denotes the expectation with respect to the distribution of W i for population members who received treatment.

Estimation procedure
There are two challenges in turning Theorem (1) into an estimator of τ PATT-C in practice.
First, we must estimate the inner expectation over potential outcomes of compliers in the RCT. In the empirical example, we use an ensemble of algorithms to estimate the response surface for compliers in the RCT, given their covariates. Thus, the first term in the expression for τ PATT-C is estimated by the weighted average of points on the response surface, evaluated for each treated population member's potential outcome under treatment. The second term is estimated by the weighted average of points on the response surface, evaluated for each treated population member's potential outcome under control.
The second challenge is that we cannot observe which individuals are included in the estimation of the second term. In the RCT control group, C i is unobservable because these individuals never receive treatment (D i = 0). We must estimate the second term of Eq. (2) by predicting who in the control group would be a complier had they been assigned to treatment. Explicitly modeling compliance allows us to decompose PATT-C estimates by subgroup according to covariates common to both RCT and observational datasets. This approach also accounts for settings where the compliance rate differs between the sample and population, as well as across subgroups.
The procedure for estimating τ PATT-C using Theorem (1) is as follows:

S.1 Using the group assigned to treatment in the RCT (S i = 1, T i = 1), train a model (or an ensemble of models) to predict the probability of compliance as a function of covariates W i .

S.2 Using the model from S.1, predict who in the RCT assigned to control would have complied with treatment had they been assigned to the treatment group. 3

S.3 For the observed compliers assigned to treatment and predicted compliers assigned to control, train a model to predict the response using covariates and treatment received.

S.4 For all individuals who received treatment in the population (S i = 0, D i = 1), estimate their potential outcomes using the model from S.3.
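Steps S.1-S.4 can be sketched as follows, with a single gradient boosted learner standing in for the ensemble described later; the function and variable names are our own illustration, not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def estimate_patt_c(W_rct, T_rct, D_rct, Y_rct, W_pop_treated):
    """Sketch of S.1-S.4. W_*: covariate arrays; T/D: assignment and
    receipt indicators in the RCT; returns the estimated PATT-C."""
    treated, control = T_rct == 1, T_rct == 0
    # S.1: among the assigned-treatment group, compliance is observed (C = D).
    compliance = GradientBoostingClassifier().fit(W_rct[treated], D_rct[treated])
    # S.2: predict who among the RCT controls would have complied.
    pred_comp = compliance.predict(W_rct[control]).astype(bool)
    # S.3: fit the response surface on observed treated compliers and
    # predicted control compliers, using covariates and treatment received.
    is_comp_treated = D_rct[treated] == 1
    W_c = np.vstack([W_rct[treated][is_comp_treated], W_rct[control][pred_comp]])
    D_c = np.concatenate([np.ones(is_comp_treated.sum()),
                          np.zeros(pred_comp.sum())])
    Y_c = np.concatenate([Y_rct[treated][is_comp_treated],
                          Y_rct[control][pred_comp]])
    response = GradientBoostingRegressor().fit(np.column_stack([W_c, D_c]), Y_c)
    # S.4: impute both potential outcomes for treated population members
    # and average the imputed individual effects.
    n = len(W_pop_treated)
    y1 = response.predict(np.column_stack([W_pop_treated, np.ones(n)]))
    y0 = response.predict(np.column_stack([W_pop_treated, np.zeros(n)]))
    return (y1 - y0).mean()
```

In the paper's application, both models are weighted ensembles rather than single learners, and survey weights enter the averaging step.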
Observe that Assumptions (3) and (4) are particularly important for estimating τ PATT-C : the success of the proposed estimator hinges on the assumption that the response surface is the same for compliers in the RCT and target population. If this does not hold, then the potential outcomes Y i10 and Y i11 for target population individuals cannot be estimated using the model from S.3. Section 5.3 discusses whether the strong ignorability assumptions are plausible in the empirical application.

Modeling assumptions
In addition to the identification assumptions, we require additional modeling assumptions for the estimation procedure. As pointed out in Section 2.1, we require that W i is complete because if any relevant elements of W i are not controlled, then there is a backdoor pathway from T i back to W i and into Y isd . Additionally, we assume that the compliance model is accurate in predicting compliance in the training sample of RCT participants assigned to treatment and also generalizable to RCT participants assigned to control (S.1 and S.2). We describe below the method of evaluating the generalizability of the compliance model.

Ensemble method
In the empirical application, we use the super learner weighted ensemble method [15,16] for the estimation steps S.1 and S.3. The super learner combines algorithms with a convex combination of weights based on minimizing cross-validated error, and typically outperforms single algorithms selected by cross-validation.
We choose a variety of candidate algorithms to construct the ensemble, with a preference for algorithms that tend to perform well in supervised classification tasks. We also have a preference for algorithms that have a built-in variable selection property. The idea is that we input the same W i and each candidate algorithm selects the most important covariates for predicting compliance status or potential outcomes. 4 We select four types of candidate algorithms: additive regression models [17]; gradient boosted regression [18]; L1- or L2-regularized linear models (i.e., Lasso or ridge regression, respectively) [19]; and ensembles of decision trees (i.e., random forests) [20]. Lasso is particularly attractive because it tends to shrink all but one of the coefficients of correlated covariates to zero.
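A minimal super-learner-style combination can be sketched as follows; this is an illustrative stand-in for the SuperLearner R package the paper uses, and the scikit-learn learners and fold count are our assumptions:

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.model_selection import cross_val_predict

def super_learner_weights(X, y, learners, cv=10):
    """Combine candidate algorithms with a convex combination of weights
    chosen by non-negative least squares on cross-validated predictions."""
    # Column j holds out-of-fold predictions from candidate learner j.
    Z = np.column_stack([cross_val_predict(m, X, y, cv=cv) for m in learners])
    w, _ = nnls(Z, y)  # Lawson-Hanson NNLS, as in the super learner
    if w.sum() == 0:   # degenerate case: fall back to equal weights
        return np.full(len(learners), 1.0 / len(learners))
    return w / w.sum()
```

The ensemble prediction is then the weighted average of the candidates' predictions after refitting each candidate on the full training set.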

Simulations
We conduct a simulation study comparing the performance of the PATT-C estimator against its unadjusted analogue, which we refer to as the Population Average Treatment Effect on the Treated (PATT): Eq. (3) identifies the population-average causal effect of treatment assignment, adjusted according to the covariate distribution of population members who received treatment. PATT is estimated following the same estimation procedure as PATT-C, except that in estimation step S.3 the response curve is estimated on all RCT participants, regardless of compliance status, conditional on their covariates and actual treatment received. Identical to S.4 in the estimation procedure for PATT-C, we then use the response model to estimate the outcomes of population members who received treatment given their covariates. Like the PATT-C estimator, the PATT estimator crucially relies on the assumption that the response surface is the same for RCT participants and population members who received treatment.
We compare the population estimators against the sample Complier Average Causal Effect (CACE) [21], which is commonly referred to as the Local Average Treatment Effect (LATE) in the econometrics literature [12,22]. In the context of program evaluation, it is more relevant than the ITT estimator because only RCT participants who received treatment would have their outcomes affected by treatment in the presence of a nonnegative treatment effect.

CACE is defined as the average causal effect of treatment received restricted to sample compliers: in other words, CACE is the treatment effect for RCT participants who would comply regardless of treatment assignment. However, we are unable to observe the compliance status of RCT participants assigned to control because we do not know if they would have complied had they been assigned to treatment. A generalization of the instrumental variables estimator of the CACE in the presence of noncompliers, identified under Assumption (5), is equivalent to scaling the ITT effect by the sample proportion of treated compliers [23].
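Under one-sided noncompliance, the IV estimator of the CACE reduces to the ITT difference in means divided by the treated compliance rate. A sketch with our own helper name and hypothetical array inputs:

```python
import numpy as np

def cace_iv(Y, T, D):
    """IV estimator of the CACE under one-sided noncompliance:
    the ITT effect scaled by the sample proportion of treated compliers."""
    itt = Y[T == 1].mean() - Y[T == 0].mean()
    treated_compliance = D[T == 1].mean()  # estimate of P(D = 1 | T = 1)
    return itt / treated_compliance
```

Unlike PATT-C, this estimator makes no adjustment for differences between the RCT sample and the target population.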

Simulation design
The simulation is designed so that the effect of treatment is heterogeneous and depends on covariates which are different in the RCT and target population. The design satisfies the conditional independence assumptions in Figure 1.
In the simulation, RCT eligibility, complier status, and treatment assignment in the population depend on multivariate normal covariates W 1 i , W 2 i , W 3 i , and W 4 i with specified means and covariances. The first three covariates are observed by the researcher and W 4 i is unobserved. U i , V i , R i , and Q i are standard normal error terms.
The equation for selection into the RCT is parameterized such that e 2 varies the fraction of the population eligible for the RCT and e 4 varies the degree of confounding with sample selection. We set the constants g 1 , g 2 , and g 3 to 0.5, 0.25, and 0.75, respectively.
Complier status is determined by an equation in which e 3 varies the fraction of compliers in the population and e 5 varies the degree of confounding with treatment assignment. We set the constants h 2 and h 3 to 0.5.
For individuals in the population (S i = 0), treatment is assigned by an equation in which e 1 varies the fraction eligible for treatment in the population and e 6 varies the degree of confounding with sample selection. We set the constants f 1 and f 2 to 0.25 and 0.75, respectively. For individuals in the RCT (S i = 1), treatment assignment T i is a sample from a Bernoulli distribution with probability p = 0.5.
Finally, the response is determined by the treatment received, the covariates, and a heterogeneous treatment effect b, where we set a, c 1 , c 3 , and d to 1 and c 2 to 2.

We generate a population of 30,000 individuals and randomly sample 5,000. Those among the 5,000 who are eligible for the RCT (S i = 1) are selected. Similarly, we sample 5,000 individuals from the population and select those who are not eligible for the RCT (S i = 0) to be our observational study participants. 5 We set each individual's treatment received D i according to their treatment assignment and complier status and observe their responses Y isd . In this design, the manner in which S i , T i , D i , C i , and Y isd are simulated ensures that Assumptions (1)-(5) hold.
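A stylized version of this data-generating process can be sketched as follows; the coefficient placements and the response form are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30_000
# Four multivariate normal covariates; W4 is unobserved by the researcher.
W = rng.multivariate_normal(mean=np.zeros(4), cov=np.eye(4), size=N)
U, V, R, Q = rng.normal(size=(4, N))  # standard normal error terms

# Stylized selection, compliance, and population-treatment equations.
S = (0.5 * W[:, 0] + 0.25 * W[:, 1] + 0.75 * W[:, 2] + U > 0).astype(int)
C = (0.5 * W[:, 1] + 0.5 * W[:, 2] + V > 0).astype(int)
T = np.where(S == 1,
             rng.integers(0, 2, N),  # Bernoulli(0.5) assignment in the RCT
             (0.25 * W[:, 0] + 0.75 * W[:, 1] + R > 0).astype(int))
D = T * C  # one-sided noncompliance: receipt requires assignment and compliance

# Illustrative response with a heterogeneous treatment effect b(W).
b = 1.0 + 2.0 * W[:, 1]
Y = 1.0 + b * D + W[:, 2] + Q
```

Varying the coefficients on the covariates (the paper's e parameters) changes the eligibility, compliance, and confounding levels explored in the simulation.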
In the assigned-treatment RCT group (S i = 1, T i = 1), we train a gradient boosted regression on the covariates to predict who in the control group (S i = 1, T i = 0) would comply with treatment (C i = 1), which is unobservable. These individuals would have complied had they been assigned to the treatment group. For this group of observed compliers to treatment and predicted compliers from the control group of the RCT, we estimate the response surface by training another gradient boosted regression on features (W 1 i , W 2 i , W 3 i ) and D i .

Simulation results
PATT outperforms the PATT-C only at a 90% compliance rate, and the CACE outperforms the PATT when the population compliance rate is at 60% or below. In the Appendix, Figures A1, A2, and A3 plot the relationships between estimation error and the degrees of confounding in the mechanisms that determine compliance, treatment assignment, and sample selection, respectively. The estimation error of PATT-C is comparatively insensitive to increases in the degree of confounding in the three mechanisms, relative to its unadjusted counterpart. The estimation error of CACE is generally more variable than that of the population estimators due to CACE's inability to account for differences between the sample and target population.

We also include indicator variables on household size because lottery selection was random conditional on the number of household members. All analyses in the current application cluster-adjust standard errors at the household level because treatment occurs at the household level. The analyses also use survey weights to adjust for the probability of being sampled and non-response.
The response data originate from a mail survey containing questions about health insurance and health care use. The response variables measure health care use in terms of the number of ER and primary care (i.e., outpatient) visits in the past six months. Following Finkelstein et al. [24], indicator variables for survey wave and interactions with household size indicators are also included as predictors in the response and complier models because the proportion of treated participants varies across the survey waves.

Observational data
We acquire data on the target population from the National Health Interview Survey (NHIS) [32] for the period 2008 to 2017. 6 We restrict the sample to respondents with income below 138% of the FPL and who are uninsured or on Medicaid, and select covariates on respondent characteristics that match the OHIE pretreatment covariates. We use a recoded variable that indicates whether respondents are on Medicaid as an analogue to the OHIE compliance measure. The outcomes of interest from the NHIS are based on questions that are virtually identical to the OHIE mail survey questions, except that the utilization questions in the NHIS are asked with a 12-month rather than a 6-month look-back period. Following Finkelstein et al. [24], we resolve this discrepancy by halving the NHIS responses in order to make them comparable to the OHIE outcomes.

Verifying assumptions
We examine the identification assumptions for τ PATT-C prior to conducting the placebo tests in Section 5.4 and reporting the results in Section 5.5.

Consistency
Assumption (1) ensures that potential outcomes for participants in the target population would be identical to their outcomes in the RCT if they had been randomly assigned their observed treatment. In the empirical application, Medicaid coverage for uninsured individuals was applied in the same manner in the RCT as it is in the population. Differences in potential outcomes due to sample selection might arise, however, if there are differences in the mail surveys used to elicit health care use responses between the RCT and the nonrandomized study.

Conditional independence
Assumption (2) is violated if assignment to treatment influences the compliance status of individuals with the same covariates. The compliance ensemble can accurately classify compliance status for 78% of treated RCT participants using only the number of household members, survey wave (and the interaction between these indicators and household size indicators), and pretreatment covariates (and not treatment assignment) as predictors. 7 This gives evidence in favor of the conditional independence assumption.

Strong ignorability
We cannot directly test Assumptions (3) and (4), which state that potential outcomes for treatment and control are independent of sample assignment for individuals with the same covariates and assignment to treatment. The assumptions are only met if every possible confounder associated with the response and the sample assignment is accounted for. In estimating the response surface, we use all demographic, socioeconomic, and pre-existing health condition data that were common in the OHIE and NHIS data. Potentially important unobserved confounders include the number of hospital and outpatient visits in the previous year, proximity to health services, and enrollment in other federal programs.
The final two columns of Table A2 compare RCT participants selected for Medicaid with population members on Medicaid. Compared to the RCT compliers, the population members who received treatment are younger, more likely to be female, and more racially and ethnically diverse. Diagnoses of diabetes, asthma, high blood pressure, and heart disease are more common among the population on Medicaid than among the RCT treated.
Strong ignorability assumptions may also be violated because the OHIE applied more stringent exclusion criteria than the NHIS. While the RCT and population sample both screened for individuals below the FPL, only the RCT required those enrolled to recertify their household income eligibility during the study period. Strong ignorability would not hold if the failure to recertify is correlated with unobserved variables.

No defiers
Angrist et al. [12] show that the bias due to violations of Assumption (5) is equivalent to the difference in average causal effects of treatment received for compliers and defiers, multiplied by the relative proportion of defiers, P(i is a defier)/(P(i is a complier) − P(i is a defier)). Table A4 reports the distribution of participants in the OHIE by status of treatment assignment and treatment received. Assumption (5) does not hold due to the presence of defiers; i.e., participants who were assigned to control but enrolled in Medicaid during the study period. About 7% of the RCT sample were assigned to control but enrolled in Medicaid (T i < D i ) and 66% of the sample complied with treatment assignment (D i = T i ), which results in a bias multiplier of approximately 0.1. Suppose that the difference in average causal effects of Medicaid received on ER use for compliers and defiers is 1.2%. The resulting bias is only about 0.1%, which would not meaningfully alter the interpretation of the PATT-C or CACE estimates.
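The bias arithmetic above can be checked directly; note that the unrounded multiplier is about 0.12, which the text rounds to 0.1:

```python
# Bias bound from Angrist et al. [12], using the shares reported in the text.
p_defier = 0.07    # assigned to control but enrolled in Medicaid
p_complier = 0.66  # treatment received matched assignment
multiplier = p_defier / (p_complier - p_defier)  # relative proportion of defiers
effect_difference = 0.012  # supposed complier-defier effect difference (1.2%)
bias = multiplier * effect_difference
print(round(multiplier, 3), round(bias, 4))  # -> 0.119 0.0014
```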

Implied assumptions
We implicitly assume no interference between households in the OHIE because treatment assignment occurred at the household level. Within-household interference is not possible in this RCT because household members share the same treatment status. Interference between households would threaten the no-interference assumption in the unlikely case that the Medicaid coverage of individuals in treated households affects the health care use of individuals in households assigned to control.

The implied exclusion restriction assumption ensures that treatment assignment affects the response only through enrollment in Medicaid. It is reasonable that a person's enrollment in Medicaid, not just their eligibility to enroll, would affect their hospital use. For private health insurance, one might argue that eligibility may be negatively correlated with hospital use, as people with pre-existing conditions are less often eligible yet go to the hospital more frequently. This should not be the case with a federally funded program such as Medicaid.

Placebo tests
We conduct placebo tests to check whether the average outcomes differ between the RCT compliers on Medicaid and the adjusted population members who received Medicaid. If the placebo tests detect a significant difference between the mean outcomes of these groups, it would indicate bias arising from violations of the identification and modeling assumptions. We first perform a two one-sided tests (TOST) procedure [34] that evaluates equivalence between the weighted distributions; under TOST, the null hypothesis is a substantively large difference. The TOST p-value is below the conventional level of significance (p ≤ 0.05), indicating rejection of the null of a substantively large difference. Second, we conduct standard tests-of-difference and fail to reject the null of no difference. These results imply that the PATT-C estimator is not biased by differences in how Medicaid is delivered or health outcomes are measured between the RCT and population, or by differences in the unobserved characteristics of individuals in the sample or population.

Estimates that do not account for noncompliance in the RCT indicate a similarly sized negative effect on the number of ER visits and no effect on the number of outpatient visits attributable to Medicaid coverage. Our population estimates are consistent with the study of Kowalski [30], which extrapolates LATE estimates to the Massachusetts population and finds negative LATEs on ER utilization.
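A minimal version of the equivalence test can be sketched as two one-sided t-tests with a symmetric equivalence margin; the margin, the unpooled standard error, and the degrees-of-freedom simplification are our assumptions:

```python
import numpy as np
from scipy import stats

def tost_pvalue(x, y, low, high):
    """Two one-sided tests (TOST): the null is that the mean difference
    lies outside [low, high]; a small p-value supports equivalence."""
    diff = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    df = len(x) + len(y) - 2
    p_lower = stats.t.sf((diff - low) / se, df)    # H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)  # H0: diff >= high
    return max(p_lower, p_upper)
```

Rejecting both one-sided nulls (p ≤ 0.05) supports the conclusion that the two groups' mean outcomes are equivalent within the chosen margin.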

Empirical results
The PATT-C estimates differ in direction and magnitude relative to the CACE estimates on the OHIE sample, which can be explained by differences in the covariate distributions of the RCT sample and population. The CACE estimates indicate a positive and significant effect on primary care visits and no effect on ER use. The direction and magnitude of the CACE-estimated treatment effect on ER use are almost identical to the corresponding LATE estimates on the RCT sample reported by Finkelstein et al. [24], although the authors uncover a significant (positive) effect of Medicaid on ER use.
Treatment effect heterogeneity in the population helps to explain the differences between the complier-adjusted sample and population treatment effect estimates. Figure 5 plots

Discussion
The simulation results presented in Section 4 show that the PATT-C estimator outperforms its unadjusted counterpart when the population compliance rate is low. Of course, the simulation results depend on the particular way we parameterized the compliance, selection, treatment assignment, and response schedules.
In particular, the strength of correlation between the covariates and compliance governs how well the estimator will perform, since S.1 of the estimation procedure predicts who would be a complier in the RCT control group, had they been assigned to treatment. If it is difficult to predict compliance using the observed covariates, then the estimator will perform badly because of noise introduced by incorrectly treating noncompliers as compliers. Further research should be done into ways to test how well the model of compliance works in the population or explore models to more accurately predict compliance in RCTs. Accurately predicting compliance is not only essential for yielding unbiased estimates of the average causal effects for target populations, it is also useful for researchers and policymakers to know which groups of individuals are unlikely to comply with treatment.
In the OHIE trial, less than one-third of those selected to receive Medicaid benefits actually enrolled. We accurately classified compliance status for 78% of treated RCT participants using only the pretreatment covariates as features. While we do not know how well the compliance ensemble predicts for the control group, the control group should be similar to the treatment group on pretreatment covariates because of the RCT randomization. The model's performance on the training set suggests that compliance is not purely random and depends on observed covariates, which gives evidence in favor of using PATT-C.
In the empirical application, the sample population differs in several dimensions from the target population of individuals who would be covered by other Medicaid expansions, such as the ACA expansion. For instance, the RCT participants are disproportionately white and over the age of 50. The RCT participants volunteered for the study and therefore may be in poorer health compared to the target population. These differences in baseline covariates make reweighting or response surface methods necessary to extend the RCT results to the population.
Explicitly modeling compliance allows us to decompose population estimates by subgroup according to pretreatment covariates common to both RCT and observational datasets; e.g., demographic variables, pre-existing conditions, and insurance coverage. We find substantial treatment effect heterogeneity in terms of race, education, and health status subgroups. This pattern is expected because RCT participants volunteered for the study.

B Tables & Figures
Notes: Cross-validated error and weights used for each algorithm in the super learner ensemble. MSE is the ten-fold cross-validated mean squared error for each algorithm. Weight is the coefficient for the super learner, which is estimated using non-negative least squares based on the Lawson-Hanson algorithm. The R package used for implementing each algorithm is given in parentheses. #preds. is the number of predictors randomly sampled as candidates in each decision tree in the random forests algorithm. α is a parameter that mixes the L1 and L2 norms. degree is the smoothing term for smoothing splines.