
European Economic Review

Volume 91, January 2017, Pages 89-117

Preferences, selection, and value added: A structural approach

https://doi.org/10.1016/j.euroecorev.2016.09.009

Abstract

What do applicants take into consideration when choosing a high school? To what extent do schools contribute to their students' academic success? To answer these questions, we model students' preferences and obtain the average valuation placed on each school. We then investigate what drives these valuations by carefully controlling for endogeneity using a set of creative instruments suggested by our model. We find that valuation is based on a school's location, its selectivity as measured by its cutoff score, its value added, and its past performance in university entrance exams. However, cutoffs affect school valuation an order of magnitude more than does value added.

Introduction

In much of the world, elite schools are established and very often subsidized by the government. Entry into these “exam” schools is based on performance in open competitive entrance exams. Applicants leave no stone unturned in their quest for higher scores on these entrance exams, creating enormous stress. The belief seems to be that getting into these schools is valuable, presumably because future outcomes are better in this event. Students, it is argued, will do better by going to an exam school where they are challenged by more difficult material and exposed to better peers. What actually happens? Students of these elite exam high schools, without a doubt, do better on college entrance exams and are more likely to be placed at the best university programs. But is this due to selection or value-added by these schools? It is quite possible that the success of students from exam schools creates the belief that these schools add value. This belief results in better students sorting into exam schools so that students from these schools do better, which perpetuates the belief system.

The usual way of ranking schools is in terms of their selectivity, how hard they are to get into in terms of some performance measure like the SATs in the US,1 or in terms of how well students who graduate from them do as measured by wages, eminence in later life, or admission into further schooling. However, schools may do well in all of these dimensions merely because they admit good students and not because they provide value added and thereby improve the performance of the students they admit.2 How can we control for such selection and estimate value added? What do students seem to value? Can we model and estimate their preferences? These are the questions we try to address.

Turkey is a good place to look for answers to these questions for a number of reasons. To begin with, the Turkish admissions system is exam-driven. Admissions are rationed on the basis of performance on open competitive national central exams at the high school and university level. This eliminates incentive problems when there are a large number of students.3 Second, as education is highly subsidized in public institutions, educational options outside the country or at private institutions are much more expensive so that these exams are taken seriously by the applicants. When the stakes are high, as in Turkey, it is less likely that outcomes are driven just by noise.

We develop a way to answer the questions of interest by taking a more structural approach than much of the literature. The structure imposed allows us to economize on the data requirements. Our data consists of information on all high schools (Exam Schools) in Turkey which admit students on the basis of an open competitive exam administered at the end of middle school. Not all middle schoolers take this exam as it is voluntary. We obtained (from public sources) the admission cutoff scores of each exam school, the number of seats in each such school, and the overall distribution of scores of students who chose to take this exam. For one school only, we also have the distribution of scores of admitted students. We also have the mean performance of students in each exam high school in the university entrance exam. We would like to emphasize that we do not have individual level data on performance in the high school (or university entrance) exam or on stated preferences for high schools.

We use this data in Section 3 to estimate a nested logit model of preferences over high schools, taking into account that exam schools only admit the highest scoring students who apply. Thus, students choose their best school from schools whose cutoff is below their score. We estimate preferences in two steps. First, by using information on the minimum cutoff scores, we derive the demand for each school, conditional on the correlation of shocks within a nest. We obtain the mean valuation for each school by setting demand equal to the number of available seats and solving for mean valuation. Second, we pin down the correlation of shocks within a nest using information on the maximum and minimum cutoff scores in each school. This twist, to our knowledge, is novel. The idea is quite simple. If preference shocks are perfectly correlated within a nest, then preferences are purely vertical and the minimum score in the most valued school in the nest cannot be lower than the maximum score in the second most valued school in the nest. Thus, the extent of overlap in the scores between schools within a nest identifies the correlation in preference shocks in the nest.
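To fix ideas, the sketch below illustrates the first step in a deliberately simplified setting: a plain multinomial logit rather than the paper's nested logit, simulated students drawn from a stand-in score distribution, and hypothetical cutoffs and seat counts. Each student chooses the best school among those whose cutoff she clears, and the mean valuations delta_j are adjusted until simulated demand matches the number of seats. The function names and inputs are illustrative, and the sketch omits the second step, in which the within-nest correlation is pinned down from the overlap of minimum and maximum scores across schools in a nest.

    # A minimal sketch of the demand inversion, assuming a plain logit and hypothetical inputs.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_demand(delta, cutoffs, scores):
        """Count simulated students choosing each school when each student picks the
        feasible school (cutoff <= own score) with the highest delta_j + eps_ij,
        against an outside option with mean utility normalized to zero."""
        demand = np.zeros(len(delta))
        for s in scores:
            feasible = cutoffs <= s                      # schools this student can get into
            eps = rng.gumbel(size=len(delta))            # i.i.d. taste shocks (plain logit)
            u = np.where(feasible, delta + eps, -np.inf)
            if feasible.any() and u.max() > rng.gumbel():
                demand[np.argmax(u)] += 1
        return demand

    def invert_valuations(cutoffs, seats, scores, n_iter=200, step=0.2):
        """Raise delta_j when simulated demand falls short of seats and lower it when
        demand exceeds seats (a simplified, BLP-style fixed point on log demand)."""
        delta = np.zeros(len(seats))
        for _ in range(n_iter):
            d = simulate_demand(delta, cutoffs, scores) + 1e-8
            delta = delta + step * (np.log(seats) - np.log(d))
        return delta

    # Hypothetical inputs: three schools (cutoffs and seat counts) and a stand-in for
    # the observed distribution of high school entrance exam scores.
    cutoffs = np.array([80.0, 60.0, 40.0])
    seats = np.array([50.0, 100.0, 200.0])
    scores = rng.normal(55.0, 15.0, size=2000)
    print(invert_valuations(cutoffs, seats, scores))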

Finally, in Section 3, to see what applicants care about in a school, we regress the mean valuation of schools on the schools' characteristics (its location, size, mean performance in the verbal and quantitative parts of the university entrance exam, type of school, and the cutoff score). The error term, which is meant to capture shocks to school valuations, is likely to be correlated with the cutoff, as greater valuations raise demand and hence the cutoff, biasing the estimates upwards. We use a clever instrument suggested by our model to correct for endogeneity bias. We find that selectivity does indeed seem to raise valuations.
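As a stylized illustration of this step, the following sketch runs a two-stage least squares regression of mean valuations on school characteristics, instrumenting the endogenous cutoff. The instrument z and all variable names are placeholders on synthetic data; the paper's actual instrument, which is suggested by the model, is not reproduced here, and the naive second-stage standard errors would need the usual 2SLS correction.

    # An illustrative 2SLS on synthetic school-level data; the instrument z is a placeholder.
    import numpy as np

    def two_stage_ls(y, X_exog, x_endog, z):
        """Manual two-stage least squares: project the endogenous cutoff on the
        exogenous characteristics plus the instrument, then regress valuations on
        the exogenous characteristics and the fitted cutoff."""
        ones = np.ones(len(y))
        Z = np.column_stack([ones, X_exog, z])            # first-stage regressors
        gamma, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)
        x_hat = Z @ gamma                                 # fitted cutoff
        X2 = np.column_stack([ones, X_exog, x_hat])       # second-stage regressors
        beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
        return beta                                       # last entry: coefficient on the cutoff

    # Synthetic example: valuations depend on characteristics and the (endogenous) cutoff.
    rng = np.random.default_rng(1)
    n = 300
    X_exog = rng.normal(size=(n, 3))                      # e.g. location, size, past exam performance
    z = rng.normal(size=n)                                # placeholder instrument
    cutoff = 0.5 * z + X_exog @ np.array([0.2, -0.1, 0.3]) + rng.normal(size=n)
    valuation = 1.0 * cutoff + X_exog @ np.array([0.5, 0.2, -0.3]) + rng.normal(size=n)
    print(two_stage_ls(valuation, X_exog, cutoff, z))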

Section 4 focuses on value added. In this section we restrict attention to a subset of schools (Science high schools). To understand the value added by a school, we use the data on the overall distribution of scores on the high school entrance exam, along with the estimated preference parameters, to allocate students to high schools and obtain the simulated distribution of students' scores on the high school entrance exam in each school. We then compare the mean of the simulated distribution for each school to its mean score in the University Entrance Exam after standardizing the scores. This gives an estimate of the value added by a school, though one possibly contaminated by mean reversion. Mean reversion is likely to be especially severe at the top and bottom of the school hierarchy as it is a consequence of randomness in performance. Students in the best (worst) schools disproportionately include those who are just lucky (unlucky), so that their performance in the university entrance exams will tend to be below (above) that in the high school entrance exams even if there is zero value added. We use simulation-based methods, as well as information on each student in a single school, to estimate the average value added by a school while controlling for mean reversion. Note that the extent of the mean reversion depends on both preferences and the extent of noise in the high school entrance exam score, so that correcting for it can only be done by taking a structural approach. Finally, we ask if value added also drives the mean valuation of a school.
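The mean-reversion logic can be illustrated with a stylized simulation, shown below. Under zero value added, students assigned to the most selective schools are disproportionately those with lucky draws on the entrance exam, so their expected later performance falls short of their entrance scores. The distributions, the noise level, and the simple top-slicing assignment are illustrative assumptions only; the paper's correction instead uses the estimated preferences and the observed score distribution.

    # A stylized simulation of the mean-reversion benchmark under zero value added.
    import numpy as np

    rng = np.random.default_rng(2)
    n_students = 100_000
    ability = rng.normal(0.0, 1.0, size=n_students)              # persistent component of performance
    entrance = ability + rng.normal(0.0, 0.5, size=n_students)   # entrance exam = ability + noise

    # Illustrative assignment: the top 1% of entrance scores go to the most selective
    # "school A", the next 4% to "school B" (a stand-in for allocation by cutoff).
    order = np.argsort(-entrance)
    schools = {"A": order[:1000], "B": order[1000:5000]}

    for name, idx in schools.items():
        # With zero value added, expected performance on the later exam equals ability,
        # so the gap below is the pure mean-reversion component for that school.
        gap = ability[idx].mean() - entrance[idx].mean()
        print(f"school {name}: mean entrance {entrance[idx].mean():.2f}, "
              f"expected later score {ability[idx].mean():.2f}, mean reversion {gap:.2f}")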

Our results show that highly valued schools do not all have high value added. Some have negative value added, while others have positive value added. Our estimates suggest that students like more selective schools, so that better students, who have more options open to them, sort into these schools. We find that students also care about the value added by the school, but its importance (in a standardized regression) is far less than that of the cutoff score. Consequently, even when schools do not add value to their students (in terms of their performance on the university entrance exam), they attract good students, providing an advantage to incumbents and an impediment to entry and the functioning of the market.

A major contribution of our paper is to relate the valuations placed on schools to measures of school characteristics such as selectivity, facilities, location and value added by the school. It is important to understand what lies behind preferences. If preferences seem to be driven by selectivity alone, selective schools need not be those that are adding the most value, and circular causation will drive rankings. Preferences may be unrelated to school performance (i.e., value added) either because it is hard to observe, or because of other advantages of selective schools like consumption or network value. If the former, a case could be made for publicly providing measures of value added school by school.

Exam schools in Turkey are given more funding per student, and they have higher teacher-to-student ratios. The better off are also more likely to be able to get into these schools (see, for example, Caner and Okten, 2013), so that such funding is likely to be a regressive force.4 Providing funding based on the value added by a school may make such funding less regressive, as well as better align the incentives of schools and society. Though our data is on Turkey, the issues raised in this paper are of universal interest.

We proceed as follows. First, we relate our work to the literature. In Section 2, we provide the necessary background regarding the Turkish system and the data. Section 3 lays out the model, the estimation of preferences and the first results on what seems to drive valuations. In Section 4, we estimate the value-added by the schools and examine whether it is related to valuations. Then, we conclude. Additional figures, tables and details about the estimation strategy can be found in the Appendix.

There is a large literature that deals with school choice and school effects in the US, as well as in other developed and developing countries. While the evidence on the effect of attending more selective schools on students' academic achievement is mixed, one possible interpretation is that merely attending a more selective school is not enough to improve future performance. An essential input might also be the motivation of students. In the data, this could be manifested as students choosing to attend a more selective school despite it being harder for them to do so in some dimension.

In the US, the consensus seems to be that expanding school options, via having exam schools or having lotteries that allow the winners more school options, need not have much of an impact on a student's academic achievement. Abdulkadiroğlu et al. (2014) and Dobbie and Fryer Jr (2011) find that going to exam schools has little effect on academic achievement, using a regression discontinuity approach in Boston and New York, respectively. Cullen et al. (2005, 2006) use data from randomized lotteries that determine the allocation of students in the Chicago public school system. Students who win the lottery have more options open to them and so can attend schools more consistent with their tastes and hence are better off.5 They find that winning this lottery does not improve students' academic performance. In contrast to work that finds little effect of expanding school options, Hastings et al. (2012) use U.S. data from a low-income urban school district and find that winning a school choice lottery has a positive and significant effect both on student attendance after winning the lottery but before going to the new school, and on performance later on. Clark (2010) investigates the effect of attending a selective high school (Grammar School) in the UK (where selection is based on a test given at age 11 and primary school merit) and finds no significant effect on performance in courses taken by students, although the probability of attending a university is positively affected.

Dale and Krueger (2002, 2011) look at the effect of attending elite colleges on labor market outcomes. They control for selection by controlling for the colleges to which the student applied and was accepted. The former provides an indication of how the student sees himself, while the latter provides a way of controlling for how the colleges rank the student. Intuitively, the effect of selective schools on outcomes is identified by the performance of students who go to a less selective school despite being admitted to a more selective one, relative to those who go to the more selective one. Of course, if this choice is based on unobservables, this estimate would be biased.6 They find that black and Hispanic students, as well as students from disadvantaged backgrounds (less-educated or low-income families), do seem to gain from attending elite colleges. However, for most students the effect is small and fades over time.

Duflo et al. (2011) emphasize an important additional channel that may induce heterogeneity in effects. If, for example, teachers at a top ranked school direct instruction towards the top, ignoring weaker students, better students in the class may gain, while worse ones may lose. In such cases the value added of a school can vary by student ability and tracking may help teachers target students.

Card and Giuliano (2014) look at the effect of being in gifted programs. They find that high IQ students, to whom these programs are often targeted, do not seem to gain from such programs. However, students who miss the IQ thresholds but scored highest among their school/grade cohort in state-wide achievement tests in the previous year do gain. Their work suggests that “a separate classroom environment is more effective for students selected on past achievement – particularly disadvantaged students who are often excluded from gifted and talented programs.”

In contrast to these results, Pop-Eleches and Urquiola (2013) and Kirabo Jackson (2010) estimate the effect of elite school attendance in Romania and Trinidad and Tobago, respectively. They find a large positive effect on students' exam performance in the university entrance exams. This could be because students who go to elite schools in these countries are more motivated to succeed than those going to elite schools in richer countries.

From the school choice literature, Hastings et al. (2008) and Burgess et al. (2009) investigate what parents care about in a school using data from the Charlotte-Mecklenburg School District and the Millennium Cohort Study (UK), respectively. Hastings et al. (2008) take a structural approach and estimate a mixed logit model of preferences. A major contribution of their work is to use information on the stated preferences for schools, compared with the set of schools available to each student, to back out the weight placed on factors like academics, distance from home, and so on. They are then able to see whether the impact of a school differs according to "type". They find that students who put a high value on academics, insofar as they choose to attend "supposedly good" schools that are farther away, gain more from being there than students who attend simply because they live close to the school. For this reason, reduced-form estimates of the effect of attending "good" schools could be biased when such selection is not properly accounted for. If students in developing countries place greater value on good schools than do students in developed countries, this insight could explain why we see such different results for attending better schools in the two. Burgess et al. (2009) also compare the first-choice school to the set that was available (constructed by the authors using students' residence areas) and estimate trade-offs between school characteristics.

An advantage of the slightly more structural approach taken here is that we examine the whole process, and not just one of its components, in the estimation. Our approach allows us to separate preferences over schools (based on their observable attributes) from their value added. Moreover, despite the lack of panel data, i.e., not having the high school entrance exam score and the college entrance exam score for each student, we show how one can use fairly limited data on each high school, together with data on university entrance exam takers and the model, to get around this deficiency. That is to say, our approach economizes on the data needed for estimation, which extends the ability to look at policy issues, as outcomes may differ across countries. Of course, the applicability of our method depends on certain institutional characteristics, such as priority by score in the allocation of students to high schools, and the existence of uniform school leaving or university entrance exams. While our approach has many limitations, for example, it does not allow us to look at whether attending elite exam schools has heterogeneous effects, nor does it let us look at long-term effects (which may be large even if short-term ones are not7) as in Chetty et al. (2014a), it does provide a way to do a lot with relatively little data.

Section snippets

Background

In Turkey, competitive exams are everywhere. Unless a student chooses to attend a regular public high school, he must take a centralized exam at the end of 8th grade to get into an “exam school”. These are analogous to magnet schools in the US, though the competition for placement into them is national and widespread, rather than local as in the US. After high school there is an open competitive university entrance exam given every year. Most students go to cram schools (dershanes) to prepare

The model

Seats in public exam schools are allocated according to students' preferences and their performance on a centralized exam (conducted once a year). All schools have an identical ranking over students based on their test scores. Each exam school has a fixed quota, q_j, which is exogenously determined.11 The allocation process basically assigns students to schools according to their

Value-added by high schools

In the previous section, we estimated the preference parameters and simulated the high school entrance exam scores for students in each school. We allocated students to schools on the basis of the estimated preference parameters and the overall score distribution using simulations. In this section, we estimate the value-added by a school in terms of their students' academic performance. Here we are limited by the data. We do not have a panel, so we cannot match the score the student obtained on

Conclusion

Schools are hard to evaluate in the real world. Unlike most experience goods, where consumers can know how much they like the good upon consuming it, with schooling, liking the experience is only part of what people care about. They care about attributes, like reputation or selectivity that might signal something, as well as the value-added by the teaching in the school. Since consumers are unlikely to have information about the latter, even if they have information about the former,

Acknowledgments

We would like to thank Nikhil Agarwal, Verónica Frisancho, Paul Grieco, Susumu Imai, Sung Jae Jun, Corinne Jones, Mark Roberts and Cemile Yavas for comments on an earlier draft. We would also like to thank participants of the CES-IFO Area Conference on Applied Microeconomics 2013 and the Penn State Applied Econ Conference 2013 for their useful comments on an earlier draft. We benefited from the helpful comments of seminar participants at University of Kentucky and Warwick University. All errors

References (40)

  • Burgess, S., Greaves, E., Vignoles, A., Wilson, D., 2009. What Parents Want: School Preferences and School...
  • Cameron, A.C., Kim, N., 2001. Simulation Methods for Nested Logit Models. Department of Economics, University of...
  • Card, D., Giuliano, L., 2014. Does Gifted Education Work? For Which Students? Technical Report, National Bureau of...
  • Chay, K.Y., et al., 2005. The central role of noise in evaluating interventions that use test scores to rank schools. Am. Econ. Rev.
  • Chetty, R., et al., 2014. Measuring the impacts of teachers II: teacher value-added and student outcomes in adulthood. Am. Econ. Rev.
  • Chetty, R., Friedman, J.N., Rockoff, J.E., 2014b. Prior Test Scores do not Provide Valid Placebo Tests of Teacher...
  • Chetty, R., Friedman, J., Rockoff, J.E., 2015. Measuring the Impacts of Teachers: Response to Rothstein...
  • Clark, D., 2010. Selective schools and academic achievement. B.E. J. Econ. Anal. Policy.
  • Cullen, J.B., et al., 2006. The effect of school choice on participants: evidence from randomized lotteries. Econometrica.
  • Dale, S., Krueger, A.B., 2011. Estimating the Return to College Selectivity over the Career Using Administrative...