Estimating the additionality of R&D subsidies using proposal evaluation data to control for research intentions

Abstract

Empirical examination of whether R&D subsidies crowd out private investments has been hampered by selection problems. A particular worry is that project quality and research intentions may be correlated with the likelihood of receiving subsidies. Using proposal evaluation data to control for research intentions, we do not find strong evidence suggesting that this type of selection creates a severe bias. Proposal evaluation grades strongly predict R&D investments and reduce selection bias in cross-sectional regressions, but there is limited variation in grades within firms over time. Hence, in our sample, unobserved project quality is largely absorbed by firm fixed effects. Our best estimate of the short-run additionality of R&D subsidies is 1.15, i.e., a one-unit increase in subsidy increases total R&D expenditure in the recipient firm by somewhat more than a unit. We demonstrate, however, that there is measurement error in the subsidy variable. Additionality is therefore likely to be underestimated.

Notes

  1. A fourth relevant study is Serrano-Velarde (2008), who uses quantile regressions and regression discontinuity to estimate the impact of R&D subsidies on firm R&D investment under the French ANVAR program. Serrano-Velarde exploits a discontinuity resulting from program-specific eligibility requirements related to form of ownership, rather than proposal evaluation grades. We briefly summarize studies that use the regression discontinuity design in Henningsen et al. (2012), Appendix B.

  2. Arora and Gambardella (2005) estimate the effect of grants from the National Science Foundation (NSF) in the USA on impact-weighted publications in a 5-year window following the grant decision. One of their control variables is the average reviewer score of the proposal, ranging from 1 (excellent) to 5 (very poor). See also Chudnovsky et al. (2008) for a related analysis.

  3. On page 25, Jaffe sets up the equation and discusses the effect of public support on R&D output. However, on page 31, he makes it clear that the same selection problems apply when the dependent variable is total R&D expenditure; i.e., when estimating input additionality as we do in this paper. See also his equation (2b) on page 32.

  4. In the future, data that identify which applications were competing directly against each other for funding at a certain point in time will become available. It is then likely that we will observe a clearer quality cut-off in the grant awarding process and be able to use a regression discontinuity approach.

  5. This is industry-led R&D, or “user directed innovation programs” (BIP) in the terminology of the Research Council of Norway. See http://www.rcn.no/en/Research_programmes/1184159006970.

  6. There are altogether 142 programs in our dataset. 50 % of the firms have received support from more than one program, and 8 % have received support from more than four programs. It should be noted, however, that the definition of a program, and in particular what constitutes a new program, is not fully consistent in our data. In some cases, we see a program existing for several years, awarding new grants each year. In other cases, programs award support for only 1 year, but a new program with a similar name is established the next year, and so on. Most programs are relatively small in terms of the number of applications and the amount awarded, even if we group together programs that we assume are similar.

  7. The rules for how to calculate the co-financing were perceived to be rather lenient in the period we have data for. The correct number may therefore be closer to 50 %.

  8. We exclude firms that have never performed R&D because these firms are not eligible for R&D subsidies, although all firms are, in principle, eligible to apply. Among the excluded firms that are never observed with positive R&D, 52 firms applied for subsidies and had their applications rejected. It would be possible to include the observations from these firms in the regressions reported in Table 8, and we have checked that our results are robust to this choice.

  9. We have also tried using the maximum grade instead of the mean. This did not change results materially.

  10. Under the assumptions of the classical errors-in-variables model, and if the two measures are of equal quality, this correlation, known as the reliability ratio, measures the fraction of the variance in reported subsidies that is due to true variation in subsidies. See, e.g., Ashenfelter and Krueger (1994) or Bound et al. (2001). A small numerical illustration is sketched after these notes.

  11. Some measurement errors could probably be avoided by pooling subsidies from different sources, but then we could not estimate the degree of additionality associated with each specific source. The degree of additionality is likely to vary between sources, e.g. because some public financing is given as matching grants and some as contract R&D.

  12. See Finne (2011) for an assessment of the accuracy of the Norwegian R&D survey.

  13. Note, however, that programs may be anticipated and that the launch of programs may be correlated with technological opportunities.

  14. An exception is Lerner (1999), who explicitly notes that the firms were of very different sizes and that a heteroskedasticity problem potentially existed. Because of this, he divides the firms into groups on the basis of sales and calculates heteroskedasticity-consistent standard errors. Bronzini and Iachini (2011) scale all variables with sales.

  15. Formal tests show that this procedure works very well compared both to no weighting and to scaling all variables with sales. We combine Park’s procedure with heteroskedasticity-robust standard errors, so eliminating all heteroskedasticity is not imperative. A sketch of the procedure is given after these notes.

  16. Firm fixed effects will hopefully absorb most of the unobserved effects associated with all grants, but it is possible that EU subsidies in particular, being highly competitive, are associated with unobserved time-varying changes in research quality and intentions that we are not able to control for, cf. Table 8. Hence, the coefficients on EU subsidies should be interpreted with caution.

  17. The large number of zeros in intramural R&D and subsidies presents a specification problem. We use the approximation that ln(z) = 0 if z = 0, where z is a variable measured in 1,000 real NOK. A sketch of this transformation is given after these notes.

  18. This number is based on mean intramural R&D for the 727 firms that receive subsidies and can be calculated using the share of subsidies in intramural R&D given in Table 6.

  19. This is probably the case for most additionality analyses, and is obviously the only source of identification for the many studies that rely on a dummy for whether firms receive subsidies or not.

  20. This is when the model is estimated with firm fixed effects. For pooled OLS, the firm fixed effect α_i also needs to be uncorrelated with ω_it conditional on the proxy.

  21. Kauko (1996) suggests that controlling for applications filed will solve the endogeneity problem. This, however, is only true to the extent that the firms' own evaluation of the R&D projects is not affected by the outcome of the application.

  22. This is seen by dividing 3,537 by 2,262, which gives approximately 1.56.

  23. See, e.g., David et al. (2000, Sect. 2.6), for a discussion.

  24. Another situation that will take us outside the classical measurement error model is if total subsidies are measured correctly, but distributed erroneously across the three subsidy variables included in our regression.
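
The reliability ratio described in note 10 can be illustrated with a small simulation. The sketch below is purely illustrative and makes its own assumptions: the variable names (`subsidy_survey`, `subsidy_register`) and the data-generating process are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical data: two independent, error-ridden reports of the same
# true subsidy amount (e.g. a survey-based and a register-based figure).
rng = np.random.default_rng(0)
true_subsidy = rng.lognormal(mean=6.0, sigma=1.0, size=1_000)
subsidy_survey = true_subsidy + rng.normal(scale=200.0, size=1_000)
subsidy_register = true_subsidy + rng.normal(scale=200.0, size=1_000)

# Under classical measurement error with measures of equal quality, the
# correlation between the two reports estimates the reliability ratio,
# i.e. Var(true subsidy) / Var(reported subsidy).
reliability_ratio = np.corrcoef(subsidy_survey, subsidy_register)[0, 1]

# In the bivariate case, the OLS coefficient on a mismeasured regressor
# is attenuated towards zero by roughly this factor.
print(f"estimated reliability ratio: {reliability_ratio:.2f}")
```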
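
Note 15 refers to Park's (1966) procedure for modelling heteroskedasticity. The following sketch shows the general idea on simulated data; the use of sales as the scale variable and all variable names are assumptions made for illustration, not a reproduction of the paper's estimation.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data where the error variance grows with firm sales.
rng = np.random.default_rng(1)
n = 500
sales = rng.lognormal(mean=4.0, sigma=1.0, size=n)
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n) * np.sqrt(sales)
X = sm.add_constant(x)

# Step 1: OLS to obtain residuals.
ols = sm.OLS(y, X).fit()

# Step 2: Park regression -- log squared residuals on log(sales)
# estimates gamma in Var(e_i) = sigma^2 * sales_i^gamma.
park = sm.OLS(np.log(ols.resid ** 2), sm.add_constant(np.log(sales))).fit()
gamma = park.params[1]

# Step 3: WLS with weights proportional to 1 / sales^gamma, combined
# with heteroskedasticity-robust standard errors as in note 15.
wls = sm.WLS(y, X, weights=1.0 / sales ** gamma).fit(cov_type="HC1")
print(wls.params, wls.bse)
```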
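
The zero-handling approximation in note 17 amounts to replacing zeros by one (i.e. 1,000 NOK) before taking logs. A minimal sketch, with a hypothetical function name and example values:

```python
import numpy as np

def log_with_zeros(z_thousand_nok):
    """ln(z) using the approximation ln(z) = 0 when z = 0, where z is
    measured in 1,000 real NOK; zeros are thereby treated like values
    of 1 (i.e. 1,000 NOK), whose log is exactly zero."""
    z = np.asarray(z_thousand_nok, dtype=float)
    return np.log(np.where(z > 0, z, 1.0))

# Example: intramural R&D of 0, 1 and 500 thousand NOK.
print(log_with_zeros([0, 1, 500]))  # -> [0.  0.  6.21...]
```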

References

  • Angrist, J. D., & Pischke, J.-S. (2009). Mostly harmless econometrics: An empiricist’s companion. New Jersey: Princeton University Press.

  • Arora, A., & Gambardella, A. (2005). The impact of NSF support on basic research in economics. Annales d’Économie et de Statistique, 79–80, 91–117.

  • Ashenfelter, O., & Krueger, A. (1994). Estimates of the economic return to schooling from a new sample of twins. American Economic Review, 84(5), 1157–1173.

  • Benavente, J. M., Crespi, G., & Maffioli, A. (2012). The impact of national research funds: A regression discontinuity approach to the Chilean FONDECYT. Research Policy, 41(8), 1461–1475.

  • Bound, J., Brown, C., & Mathiowetz, N. (2001). Measurement error in survey data (Ch. 59). In J. Heckman & E. Leamer (Eds.), Handbook of econometrics (Vol. 5, pp. 3705–3843). Amsterdam: Elsevier Science.

  • Bronzini, R., & Iachini, E. (2011). Are incentives for R&D effective? Evidence from a regression discontinuity approach. Bank of Italy Working Paper No. 791.

  • Cerulli, G. (2010). Modelling and measuring the effect of public subsidies on business R&D: A critical review of the econometric literature. Economic Record, 86(274), 421–449.

  • Chudnovsky, D., López, A., Rossi, M. A., & Ubfal, D. (2008). Money for science? The impact of research grants on academic output. Fiscal Studies, 29(1), 75–87.

  • David, P. A., Hall, B. H., & Toole, A. A. (2000). Is public R&D a complement or a substitute for private R&D? A review of the econometric evidence. Research Policy, 29(4–5), 497–529.

  • Finne, H. (2011). Is R&D in the business enterprise sector in Norway under-reported? SINTEF Report A20772.

  • Garcia-Quevedo, J. (2004). Do public subsidies complement business R&D? A meta-analysis of the econometric evidence. Kyklos, 57, 87–102.

  • Griliches, Z., & Hausman, J. (1986). Errors in variables in panel data. Journal of Econometrics, 31(1), 93–118.

  • Henningsen, M. S., Hægeland, T., & Møen, J. (2012). Estimating the additionality of R&D subsidies using proposal evaluation data to control for firms’ R&D intentions. Statistics Norway Discussion Paper No. 729.

  • Imbens, G. W., & Lemieux, T. (2008). Regression discontinuity designs: A guide to practice. Journal of Econometrics, 142(2), 615–635.

  • Jacob, B., & Lefgren, L. (2011). The impact of research grant funding on scientific productivity. Journal of Public Economics, 95(9–10), 1168–1177.

  • Jaffe, A. B. (2002). Building programme evaluation into the design of public research-support programmes. Oxford Review of Economic Policy, 18(1), 22–34.

  • Kauko, K. (1996). Effectiveness of R&D subsidies—a skeptical note on the empirical literature. Research Policy, 25(3), 321–323.

  • Klette, T. J., & Møen, J. (2012). R&D investment responses to R&D subsidies: A theoretical analysis and a microeconometric study. World Review of Science, Technology and Sustainable Development, 9(2/3/4), 169–203.

  • Klette, T. J., Møen, J., & Griliches, Z. (2000). Do subsidies to commercial R&D reduce market failures? Microeconometric evaluation studies. Research Policy, 29(4–5), 471–495.

  • Lach, S. (2002). Do R&D subsidies stimulate or displace private R&D? Evidence from Israel. Journal of Industrial Economics, 50(4), 369–390.

  • Lerner, J. (1999). The government as venture capitalist: The long run impact of the SBIR program. Journal of Business, 72(3), 285–318.

  • Lichtenberg, F. R. (1984). The relationship between federal contract R&D and company R&D. American Economic Review, 74(2), 73–78.

  • Park, R. E. (1966). Estimation with heteroscedastic error terms. Econometrica, 34(4), 888.

  • Serrano-Velarde, N. (2008). Crowding-out at the top: The heterogeneous impact of R&D subsidies on firm investment. Job market paper: European University Institute.

  • Wallsten, S. J. (2000). The effects of government-industry R&D programs on private R&D: The case of the small business innovation research program. RAND Journal of Economics, 31(1), 82–100.


Acknowledgments

We have benefited from comments by Tore Ellingsen, Frank Foyn, Carl Gjersem, Svein Olav Nås, Arvid Raknerud and participants at the workshop on R&D Policy Evaluation at the Ministry for Higher Education and Research in Paris in November 2011. The project is financed by the Research Council of Norway.

Author information

Corresponding author

Correspondence to Jarle Møen.

Appendix

Table 13 Assessment criteria for the proposals

About this article

Cite this article

Henningsen, M.S., Hægeland, T. & Møen, J. Estimating the additionality of R&D subsidies using proposal evaluation data to control for research intentions. J Technol Transf 40, 227–251 (2015). https://doi.org/10.1007/s10961-014-9337-z
