Civilian national service programs can powerfully increase youth voter turnout

Significance

Enrolling young people as Teach For America (TFA) teachers has a large positive effect on their subsequent voter turnout. This effect is considerably larger than the effects of many previous efforts to increase youth voter turnout. Each year, TFA places thousands of young adults in 2-y teaching positions in disadvantaged communities around the United States. After their 2 y of service, we find that these young adults vote at a rate 5.7 to 8.6 percentage points higher than that of similar nonparticipants. Our results suggest that civilian national service programs targeted at young people show great promise in narrowing the enduring participation gap between younger and older citizens in the United States.


A.1 Data Details
The original file of all applicants who advanced to the final stage of the TFA admissions process contained 134,808 observations. We removed 5,463 applicants with contact restrictions, 7,221 applicants with invalid email addresses, 1,568 duplicate cases (linked to applicants who applied to TFA multiple times), and 88 applicants with no selection score. Regarding the survey of TFA applicants, to ensure that applicants who applied more than once would be contacted only once, Mo and Conn (2018) preserved contact information for only the most recent application year (1); a further 139 duplicate applicants identified in this check were removed. They relied on the application file alone and did not update contact information for alumni, so that the share of contact-information errors in the file would be the same for admits and non-admits.
All of the remaining applicants were contacted by Mo and Conn (2018) between October 1, 2015 and March 31, 2016 and invited to complete an online survey; 32,595 of these targeted applicants (27.1 percent) responded to some portion of the survey and 24,886 applicants (21.6 percent) completed it (1). After removing non-citizens and individuals who did not provide a current state of residence, we were left with a sample of 28,662 potential voters.
Details of the data we received from Teach For America, as well as the survey items whose responses we used (indicated with the label "Survey item") from the original online survey administered between October 1, 2015 and March 31, 2016, are provided below. Exact question wording is provided, along with information on how responses were recoded for the items that were recoded.

Application
1. Year - The cohort an applicant was applying for was provided. (Response Options: 2007, 2008, 2009, 2011, 2012, 2013, 2014, and 2015)
2. Graduation Year - The year in which the applicant graduated with a bachelor's degree. (Open-ended response (yyyy))
3. Admission Score - The applicant's final admission score was provided. Only individuals who made it to the final round of the admission process received an admission score, and our target sample focused only on individuals who made it to this final round.
4. Admission Cutoff Score - Information on the cutoff score was provided for each application year. To combine cohorts, we standardized scores within each year such that the cutoff is at 0, higher values indicate scoring better, and values can be interpreted as the number of standard deviations away from the cutoff. (A sketch of this standardization appears after this list.)
5. Admission Decision - Information on whether an applicant was admitted into TFA was provided. (Response Options: 0 = No; 1 = Yes)
6. Matriculation Decision - Information on whether an admitted applicant matriculated into TFA was provided. (Response Options: 0 = No; 1 = Yes)
7. Email Information - Up to two email addresses were provided for each applicant.
8. Current Address - The mailing address shared by each applicant when applying to TFA was provided. We used the state information from the mailing address.
9. University - The undergraduate university or college each applicant graduated from was provided. We used the state location of the university or college.
10. Mobile Phone -Mobile phone number shared by each applicant when applying to TFA was provided. We used the area code to identify state information.
11. State -Survey item: "In which state do you currently reside?" [Dropdown menu with all 50 states, Washington, D.C., and US territories]
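For concreteness, the per-year re-centering and standardization described in item 4 could be implemented as follows. This is a minimal sketch, not the authors' code; the data frame apps and the columns year, score, and cutoff are hypothetical names.

library(dplyr)
# Re-center each applicant's admission score at that year's cutoff and scale
# by the within-year standard deviation, so that 0 marks the cutoff and units
# are standard deviations of the admission score.
apps <- apps %>%
  group_by(year) %>%
  mutate(score_std = (score - cutoff) / sd(score, na.rm = TRUE)) %>%
  ungroup()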

Demographic Characteristics
1. Age - The applicant data provided by TFA contained information on applicant birth date, which could be used to compute an applicant's age at the time of the survey. The survey also asked: "What year were you born?" Respondents indicated the year in which they were born, and this was recoded such that the variable indicates their age in years. For all analyses aside from descriptive analyses, the variable was rescaled to lie between 0 and 1.

2. Female - The applicant data provided by TFA contained information on applicant gender. The survey also asked: "What is your gender?" (Response Options: 0 = Male; 1 = Female)
3. Ethnicity - The applicant data provided by TFA contained information on applicant race/ethnicity. The survey also asked: "What racial or ethnic group best describes you?" (Response Op

A.3 Identifying Assumptions
First, we check for differential attrition by examining the distribution of response rates at the threshold for admission. As we show in Figure S.1, response rates are continuous at the admission score cutoff, allaying concerns that our estimates could be biased by differences in the probability that "treated" and "comparison" applicants responded to our survey. Our reduced-form estimate of the effect of scoring above the TFA cutoff on survey completion rates is 0.0002 and statistically indistinguishable from zero (p = 0.978). We conclude that applicants near the margin for admission were equally likely to take our survey.

Notes to Figure S.1: This figure plots survey response rates by admission score and includes a 95 percent confidence interval. The dots that represent the binned averages are sized to indicate the number of observations. We re-centered the admission score distribution such that zero represents the cutoff score for each year. We then standardized admission scores by year. The bin size is 0.05.
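A minimal sketch of this attrition check using the rdrobust package follows; the variable names completed and score_std are ours, and the paper's exact specification may differ.

library(rdrobust)
# Reduced-form effect of crossing the admission cutoff (score re-centered at
# zero) on the probability of completing the survey. An estimate near zero is
# consistent with no differential attrition at the threshold.
attrition_check <- rdrobust(y = completed, x = score_std, c = 0, all = TRUE)
summary(attrition_check)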
Second, as we show in Figure S.2, the probability that an individual participates in TFA is a discontinuous function of the selection score. Specifically, among survey respondents, individuals who score just above TFA's admission cutoff, c, are 21.99 percentage points more likely to become TFA teachers than those who just barely miss the threshold for admission (F = 64.41). As such, the TFA cutoff acts as a strong instrument that isolates plausibly random variation in an individual's TFA participation status.
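As a sketch, the first-stage discontinuity can be estimated and visualized with the same package (again using our placeholder names matric and score_std):

library(rdrobust)
# Jump in the probability of matriculating into TFA at the cutoff, among
# survey respondents; this is the first stage of the fuzzy RD.
first_stage <- rdrobust(y = matric, x = score_std, c = 0, all = TRUE)
summary(first_stage)
# Binned-means plot of matriculation against the score, analogous to Fig. S.2.
rdplot(y = matric, x = score_std, c = 0,
       x.label = "Standardized admission score",
       y.label = "Matriculated into TFA")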
Third, we test for differences in the observable characteristics of individuals who scored just on either side of the admission score cutoff, to ensure that individuals just below the cutoff are similar to those just above it.

Fig. S.2: First Stage Results
Notes: This figure plots the fraction of individuals who matriculated into TFA by admission score. We re-centered the admission score distribution such that zero represents the cutoff score for each year. We then standardized admission scores by year. In each figure, the bin size is 0.015. The circle size represents the density of observations within each bin. The sample includes all TFA applicants who responded to our survey.
As we summarize in Figure S.3, the reduced-form estimate of the jump in each baseline applicant characteristic at the cutoff among survey respondents is zero, signifying that there are no differences in observable applicant characteristics at the cutoff. (We likewise find that observable pre-treatment measures trend smoothly at the cutoff when we use the full pool of applicants to TFA instead of just those who responded to our survey.)

Finally, we test for possible manipulation of selection scores, both visually and empirically (Figure S.4). TFA admission officers do not share the admission score cutoff with interviewers. Consistent with this policy, neither of these exercises suggests that either selectors or applicants were aware of the cutoff. Specifically, when we conduct a test for score manipulation (2), we observe that the density of admission scores is continuous at the cutoff, both for the entire TFA applicant pool (p = 0.754) and for the individuals who responded to the survey (p = 0.925).
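One way to implement such a density test is with the rddensity package; the following is a sketch under our placeholder names, and the test cited in the paper (2) may differ in its details.

library(rddensity)
# Manipulation test: is the density of admission scores continuous at the
# cutoff? A large p-value is consistent with no sorting around the threshold.
dens_test <- rddensity(X = score_std, c = 0)
summary(dens_test)
# Companion plot of the estimated density on either side of the cutoff.
rdplotdensity(dens_test, X = score_std)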

A.4 Matching Procedure
To match the TFA applicants with their records in the voter file, we used the fastLink algorithm (see (3) for details on the algorithm and its application to linking with voting records). We relied on three pieces of information about applicants to locate their vote histories: their name, their birth year, and their state of residence. The fastLink algorithm calculates the posterior likelihood of a link between an applicant and an entry in the voter file. We matched fully on first name and sex (i.e., the sex and first name of a voter file entry and applicant must be exactly the same to count towards the match likelihood) and partially on last name and birth year (i.e., voter file entries and applicants are considered better or worse matches based on the string/year distance between their last names and birth years). Matches with a posterior probability of less than 0.85 were discarded. First name, last name, and sex were available for nearly all applicants from TFA records. Birth year was available from TFA and from the Mo and Conn (2018) survey (1). Data on birth year from the survey is available at similar rates for admitted and non-admitted applicants, but data on birth year from TFA is missing for many non-admitted applicants in the four application cohorts between 2010 and 2013. For this reason, many results presented here rely on matches based on birth year information from the survey.
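A minimal sketch of this linkage step with fastLink follows; the column names (first, last, sex, byear) and data frame names are hypothetical, and the production code may set additional options.

library(fastLink)
# Probabilistic record linkage: exact agreement on first name and sex,
# partial (string-distance) agreement on last name, and numeric agreement on
# birth year. Pairs with posterior match probability below 0.85 are dropped.
fl_out <- fastLink(
  dfA = applicants, dfB = voter_file,
  varnames         = c("first", "last", "sex", "byear"),
  stringdist.match = "last",
  partial.match    = "last",
  numeric.match    = "byear",
  threshold.match  = 0.85
)
matched <- getMatches(dfA = applicants, dfB = voter_file,
                      fl.out = fl_out, threshold.match = 0.85)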
In the analyses that use the TFA birth year from the administrative data, we drop the birth year information for the 2010-2013 cohorts (birth year is only provided for TFA participants, and not for non-admits, in those years) and proxy for respondents' birth years by subtracting 22 from their year of college graduation. With graduation year missing for only 3 applicants and a plurality of applicants graduating from college at age 22, this proxy method allows us to approximate birth year for all applicants in a uniform manner. Importantly, this proxy method produces an estimate of birth year that is similarly accurate on both sides of the cutpoint, as illustrated in Figure S.5, which shows the proportion of applicants with birth years equal to, one year from, or 2-3 years from the proxy measure of age on either side of the cutpoint. There is no statistically significant difference at the cutpoint in the accuracy of the proxy measure in any of these cases (p = .39, .97, and .99, respectively).
We carried out the matching process separately for each state's voter file, only looking for matches between each state's voter registrants and applicants we identified as potential residents of that state. Potential states of residence were identified based on four possible sources: the state of residence reported on the Mo and Conn (2018) survey (1), the state of residence reported on the TFA application, the area code of the cell phone number on the TFA application, and the state of the applicant's university. We searched for matches for an applicant in every state linked to them by at least one of these four sources.
Analyses reported in the paper use one of two sets of matches: those from the state each applicant reported in the survey, and those from a combination of application-based sources (which we call the "hybrid" strategy). For the hybrid strategy, we first use matches from the state listed as an applicant's current address in their application; if there is no match available from the state in their current address, we use matches from the state of their university, then matches from the state of their cell phone number's area code.

The Data Trust voter file for any given state includes records for people who have registered to vote in that state and have not been purged from the voter file, as well as some records from commercial sources. The file therefore includes a mix of registered voters and people who are not or are no longer registered to vote.

In all, 70.5% of survey respondents were matched to a record in the voter file in the state of residence they indicated on the survey. For the full sample of 120,329 final-round applicants, we had up to three different sources of location data in the TFA administrative data: their addresses, phone numbers, and universities. After restricting the sample to voting-eligible applicants, we found potential states of residence in the TFA administrative data for more than 99% of applicants through these sources. For the 59% of applicants who were linked to multiple states through these sources, we searched for records in each state; if multiple records were discovered, we prioritized them in the following order: current address, university location, and cell phone area code. We found matches for 41% of voting-eligible applicants in the states of residence indicated by their application. The match rate is lower because the application state information is more likely to be outdated at the time of the voter file data compilation in 2017.
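The hybrid prioritization rule can be summarized with a simple fallback. The following is a schematic sketch assuming one row per applicant and hypothetical columns holding the match (if any) found via each source:

library(dplyr)
# Prefer the match found via the application's current-address state; if none,
# fall back to the university state, then to the cell phone area-code state.
matches <- matches %>%
  mutate(hybrid_match_id = coalesce(match_id_address_state,
                                    match_id_university_state,
                                    match_id_areacode_state))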

A.5 Outcome Measures
The key outcome variable used here measures whether an applicant voted in either the 2012 or 2014 national general election. Though data on earlier elections is available, the removal of voters due to state maintenance of voter files makes it more difficult to match applicants in earlier years. Applicants who were ineligible to vote in both elections due to their age are coded as missing.
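As a sketch of this coding rule (the indicator variables below are hypothetical names, assumed to be coded 0/1, with eligibility determined by age at each election):

# Turnout outcome: voted in the 2012 or 2014 general election; applicants who
# were age-ineligible for both elections are coded as missing.
df$voted_12_or_14 <- ifelse(df$eligible_2012 == 1 | df$eligible_2014 == 1,
                            as.integer(df$voted_2012 == 1 | df$voted_2014 == 1),
                            NA)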
To calculate post-treatment effects, we use an outcome measure capturing whether an applicant voted in an election that took place after their cohort's two years of TFA service; elections that took place before the application are not considered. To calculate pre-treatment effects, the outcome measure instead captures whether the applicant voted in an election that took place before they applied to TFA.

A.6 Regression Discontinuity Analysis
We employed a fuzzy regression discontinuity design (RDD) to estimate the causal effect of TFA participation on voter turnout. Each applicant i who advanced to the final round of the TFA admission process received a selection score, X_i. We define our instrument, Z_i, as follows:

Z_i = \mathbf{1}\{ X_i \geq c \},

where c is the admission cutoff; that is, Z_i equals 1 for applicants who scored at or above the cutoff and 0 otherwise. The results referred to as the "ITT," or intent to treat, capture the causal effect of having an application score above the cutpoint on turnout. The results referred to as the "CACE," or complier average causal effect, capture the causal effect of participation in TFA on compliers, instrumented by an application score above the cutpoint. Compliers are applicants who would matriculate if and only if they receive a score above the cutpoint; because the application score is not a perfect predictor of matriculation on either side of the cutpoint, we employ a "fuzzy" regression discontinuity. The estimates are local to applicants with scores at the cutpoint.
We estimate results using the "rdrobust" package in R (5). The rdrobust package selects a bandwidth for analysis to optimize mean squared error, calculates the CACE or ITT using a local polynomial estimator, adjusts the calculated CACE/ITT for bias induced by bandwidth selection, and calculates robust standard errors that account for the bias correction. We use a triangular kernel to weight observations; results are substantively similar with other weighting schemes.
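Concretely, these choices correspond to a call of the following form. This is a sketch with our placeholder variable names: turnout is the outcome, score_std the re-centered score, and matric the matriculation indicator.

library(rdrobust)
# MSE-optimal bandwidth, local polynomial point estimate with bias correction,
# robust standard errors, and a triangular kernel.
cace_fit <- rdrobust(y = turnout, x = score_std, c = 0, fuzzy = matric,
                     kernel = "triangular", bwselect = "mserd", all = TRUE)
summary(cace_fit)  # reports conventional, bias-corrected, and robust estimates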
To estimate the effect of having a score just above the cutpoint compared to one just below, we estimate a quantity of the following form:

ITT = \lim_{x \to c^{+}} E[Y_i \mid X_i = x] - \lim_{x \to c^{-}} E[Y_i \mid X_i = x],

where E[Y_i | X_i = x] represents the conditional expectation of turnout at a particular application score. The first term is the limit of this conditional expectation function as the application score approaches the cutpoint from above; the second term is the limit as it approaches the cutpoint from below. The ITT, then, can be interpreted as the difference in expected turnout between a just-admitted and a just-rejected applicant with scores exactly at the cutpoint.
In the fuzzy regression discontinuity setup we employ here, the complier average causal effect (CACE) can be represented as:

CACE = \frac{ \lim_{x \to c^{+}} E[Y_i \mid X_i = x] - \lim_{x \to c^{-}} E[Y_i \mid X_i = x] }{ \lim_{x \to c^{+}} E[M_i \mid X_i = x] - \lim_{x \to c^{-}} E[M_i \mid X_i = x] },

where M_i is a binary variable measuring whether applicant i matriculated into Teach For America. That is, we divide the ITT by the difference in the expected probability of matriculating into TFA for applicants with scores just above and just below the cutpoint.
We use the following code to estimate the ITT: rdrobust(y=dv, x=z, all=TRUE), where "dv" is the outcome variable and "z" is the application score. We use the following code to estimate the CACE: rdrobust(y=dv, x=z, fuzzy=matric, all=TRUE), where "matric" is an indicator for whether an applicant matriculated into TFA.

Note: Points to the right of 0 are those above the admittance threshold, whereas those to the left are below it. Each sub-figure reports the complier average causal effect (i.e., the CACE) and the intention-to-treat effect (i.e., the ITT), annotated above the TFA admittance threshold; standard errors are in parentheses below the coefficient estimates. The figures show the average levels of voter turnout by selection score. Lines are 4th-degree polynomials fitted separately on either side of the cutpoint (6). Binned values are sized to indicate the number of observations.

A.8 Tests of Treatment Effect Heterogeneity
Figure S.7 tests for treatment effect heterogeneity by race, gender, region of origin, and federal Pell Grant status (a proxy for socioeconomic status). The estimated effects are similar for whites and non-whites and across several geographic regions of the United States, but they appear to be larger for males than for females and, perhaps, for non-Pell Grant recipients than for Pell Grant recipients.
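One simple way to probe such heterogeneity is to re-estimate the fuzzy RD within each subgroup. The following is a sketch using our placeholder names and a 0/1 subgroup indicator such as female; the paper's exact procedure may differ.

library(rdrobust)
# Re-estimate the fuzzy RD separately by subgroup; each subgroup receives its
# own MSE-optimal bandwidth, and the point estimates can then be compared.
for (g in c(0, 1)) {
  sub <- df[df$female == g, ]
  fit <- rdrobust(y = sub$turnout, x = sub$score_std, c = 0,
                  fuzzy = sub$matric, all = TRUE)
  cat("Subgroup female =", g, "\n")
  summary(fit)
}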

A.9 Alternative Dependent and Independent Variables
This section first replicates the analyses in the main text with alternative dependent variables. Figure S.8 shows the effect of TFA acceptance and matriculation on registration to vote. This variable measures whether an applicant was located in the voter file as registered to vote at any point covered by our data. The analysis using matching strategy 2 suggests a significant positive effect on registration; the effects found using strategy 1 are not significant, though they point in the same direction.

Again, there does not seem to be evidence of a positive effect of treatment on participation before participants have received the full "dosage" of two years of service. These results should be interpreted with caution. The samples, limited to a single cohort each, are substantially smaller than those in the analyses reported in the main text. They are also more prone to measurement error because they rely on only one election to measure turnout, rather than the longer periods used for the dependent variables in the main text. In sum, however, we are not able to detect consistent positive effects of TFA on voter turnout while participants are still in the early stages of the program.
We next expand the scope of elections included in the dependent variable. Analyses in the main text are restricted to the 2012 and 2014 elections, as regular voter file maintenance means that many records of people voting in earlier elections have been purged from our data. Figure S.14 shows results including the 2008 and 2010 elections. The post-TFA results are nearly identical to the results in the main text, which reflects the fact that no applicants were "post-TFA" in the 2008 election and that few people in our sample voted in 2010 but not in later elections. The pre-treatment results show different point estimates for the ITT and CACE than the results in the main text, but, as in the main text, there is no significant pre-treatment effect of a barely-passing score on turnout for three of the four specifications.
Finally, we repeat the analyses replacing the "matriculation" compliance indicator with an indicator for whether a participant completed the TFA program. This data is only available for the 2007-2011 application cohorts. This analysis addresses the concern that the treatment effects estimated with the matriculation indicator are driven by matriculated participants who left the program before completing it. The effect is, however, stronger using the completion indicator than using the matriculation indicator in the main text, a result consistent with the notion that participants who complete the program are driving the main findings.

Note: Points represent effect estimates for completion of the TFA program, instrumented by applicants' scores.

There are many mechanisms by which TFA may produce the effects outlined in the paper. In this paper, it is not our goal to adjudicate which of these mechanisms are most likely, as 1) our primary goal is to answer the first-order question of whether experience with TFA causes an increase in voter turnout, and 2) causal mechanism testing is inherently difficult, if not impossible (7, 8). This second point is especially true when testing the effects of larger, more immersive programs. That said, here we articulate some of the theoretical reasons why experiences with TFA may combine to produce a sizable increase in voter turnout. Note that the list below is not meant to be comprehensive but, rather, to illuminate that there are many reasons to expect TFA experience to impact voting.

A.10 Proposed Congressional Legislation on National Service
The first reason why TFA may increase the voter turnout rates of its teachers is that it may help participants see the need for public policies to address inequality. TFA participants are placed in disadvantaged communities and are intimately exposed to various social maladies and inequities. TFA participants may look to government institutions as a forum for addressing social ills (1), especially since their placement involves working in the public sector as public school educators. Second, TFA may help participants build skills and beliefs that empower them to engage in politics. As hands-on instructors, TFA participants learn social and self-regulatory skills and the ability to work with people from a variety of backgrounds, all of which may promote voting (9)(10)(11). Third, TFA may build social connections (with one's fellow teachers, the TFA network, students, and parents/community leaders) that encourage political action. Stronger social ties may foster an enhanced sense of civic duty to be involved in various forms of community and political engagement (12). Fourth, former TFA teachers who remain in teaching have a direct stake in educational policy outcomes and may engage politically to preserve that stake. We note that we can rule out systematic get-out-the-vote efforts on the part of the national service organization as a mechanism for increasing political participation. Because "engaging in activities designed to influence the outcome of an election to Federal office or the outcome of an election to a State or local public office" is expressly prohibited for AmeriCorps programs under Section 1310 of the Serve America Act, TFA and other AmeriCorps programs specifically do not do anything to encourage participants to register to vote or to vote in an election (13).
Of note, the fact that the TFA treatment involves being a classroom teacher may be driving some of the observed effects. While we cannot establish or rule out whether teaching drives our effects, extant research on teacher voter turnout suggests that our observed effects are not fully driven by teaching (and by being mobilized by teachers' unions and large teacher associations like the National Education Association). However, it is plausible that our effects are larger than the effects we would observe from a national service program that does not involve teaching in the classroom. Extant research on voter turnout shows that while teachers vote at higher rates than average citizens (14, 15), it is not clear that being a teacher causes individuals to vote at higher rates. For example, those who choose to become teachers may also be more public spirited and, hence, more politically active (15). Consonant with this point, recent research has shown that past work arguing that public sector employees vote at higher rates (a group that includes many teachers, as teachers are 18 percent of the state and local government workforce (16)) suffers from major selection bias problems (17). This work finds that, once omitted variable bias is addressed, there is no evidence that being a government employee causes greater voter turnout. With that said, when we examine studies of government employees who are educators, there is some suggestive evidence that teaching can cause an increase in voter turnout stemming from occupational self-interest and/or union mobilization. Examining five school districts in California in 1999, previous research finds that teachers who live and work in the same district turn out for local elections featuring school board races at higher rates than teachers who live in a different district from the one in which they work (15, 18). However, there is no clear evidence that teachers who do not live in the district in which they work (and hence do not have an occupational stake in their local school board elections and are less likely to be mobilized by unions to vote) turn out at higher rates than the average citizen (15, 18). In other words, it is not clear that teaching uniformly causes greater voter turnout.
Though our data does not allow for a well-identified test of the effect of teaching alone on turnout, we can look at turnout among TFA applicants who were not admitted to the program and compare those who nonetheless became school teachers to those who did not. Non-admitted applicants who became teachers were not significantly more likely to vote than applicants who chose other professions, controlling for birth year, gender, race, and receipt of a Pell Grant (see the first column of Table S.3). This suggests that, within the population of college students who were interested enough in teaching to apply to TFA, those who chose to become teachers outside the program did not participate in politics at higher rates than those who did not.
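A sketch of this comparison as a linear probability model follows; the variable and data frame names are hypothetical, and the specification reported in Table S.3 may use a different functional form.

# Among non-admitted applicants only: association between becoming a teacher
# outside TFA and voting, conditional on baseline covariates (descriptive,
# not causal).
teach_fit <- lm(voted ~ became_teacher + birth_year + female + race + pell_grant,
                data = non_admits)
summary(teach_fit)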
Broadening our focus somewhat, we can look at voter turnout rates among young people who do and do not work in education in the general population. We use the 2012 CCES to compare turnout in the 2012 election among people who work in education to turnout among those who do not, controlling for gender, age, and race. We limit the sample to respondents born after 1985 with four-year college degrees to make the results more comparable to the TFA sample analyzed above. As in the TFA analyses, young people who become teachers are not more likely to vote than those in other professions (see the second column of Table S.3). Taken together, these results suggest that teaching alone is unlikely to produce increases in turnout of the magnitude we observe among TFA participants. However, these results are only suggestive, and future work should pursue well-identified designs to understand the causal effect of becoming a teacher.
On the flip side, there are several reasons to expect that TFA may actually demobilize its participants. First, TFA participants are highly mobile, and being uprooted from one's social community may be disruptive enough to decrease rates of voting (19, 20). Second, some have argued that TFA could make participants less trusting of U.S. institutions and, as such, less engaged in politics (21). However, it is theoretically unclear whether decreased trust in this context would demobilize participants or instead mobilize them by making them want to work to change the governmental systems in which they work. Ultimately, which of these mechanisms "wins out" is an empirical question, one that we explore in this paper.

B.2 Careers of Non-Admits
Survey respondents were asked the following question: "We will now ask you about the last three jobs you have held since 2007. For each position, what is your job title, sector, and start and end date for each of these positions?" Figures that break down the share of non-admits in each job sector are provided below.

Notes: Survey respondents were asked about the last three jobs they have held since 2007. This figure displays the percentage of non-participant respondents in each job sector for their third job.