
Evaluating systemic innovation and transition programmes: Towards a culture of learning

Abstract

Innovation systems and transitions thinking have become increasingly pervasive in what has been labelled a third generation of challenge-led innovation and transitions policy programmes. Although this upsurge is cause for celebration, we argue that the challenge now lies in developing evaluation methods that allow strong programmes to be distinguished from weak ones, foster a culture of learning, and maintain momentum. In this brief review, we therefore take stock of existing approaches and identify 6 tentative categories of evaluations, which we map along an axis ranging from formative to summative evaluation. Combining summative evaluations with more frequent formative evaluations may create the environment for rapid learning and policy adaptation necessary to prevent the current rise of systemic innovation and transition programmes from being short-lived.

Partially due to prominent societal and policy developments like the Paris Climate Agreement, the Green Deal, and the European Union (EU)’s recent attention to societal missions, there has been a rise in innovation and transition policy strategies dedicated to addressing ambitious societal goals [1]. Some of these strategies take a systemic approach by focusing on a broad range of technological, sociological, and economic factors that influence the possibilities for societally desirable innovations to become successful [2]. This not only goes beyond implementing individual interventions targeting particular system bottlenecks [3], but also extends past so-called “systemic instruments” that address various actors and multiple system functions at once [4]. Instead, systemic approaches concern overarching policy strategies for continuously monitoring and addressing problems that might occur when systems develop and transitions unfold [5,6].

We thus understand systemic innovation and transition programmes as integrated strategies that aim—through coordination structures and activities—to achieve alignment between a set of policy instruments that target different parts of the system in which they aim to intervene (e.g., its systemic problems as suggested by [7–9]), thereby jointly creating conditions conducive to meeting a so far unmet societal want or need. Examples include the Dutch Topsector approach for strengthening the competitiveness of 9 technoeconomic domains [10] and the Flanders Circular Strategy for addressing barriers that hamper the emergence of a circular economy [11].

While the urgency and legitimacy of such policies mainly stem from societal developments, this most recent “transformative” innovation policy paradigm [9,12] also draws inspiration from the literature on sustainability transitions. Flattering as this may be to the transitions research community, we stress that it also brings the responsibility to deliver concrete advice on how to evaluate systemic innovation and transition programmes. If transition thinking is increasingly influencing policy making, it is inevitable that this will also lead to questions on how to determine policy success. A crucial issue in the next era of transition research will most likely be how to distinguish good programmes from bad ones, to prevent bad programmes from undermining the further spread of transition-inspired policies (and making the current upsurge short-lived). With systemic innovation and transition programmes gaining popularity, it becomes all the more urgent to develop evaluation approaches that allow us to tell whether and how to improve or terminate unsuccessful policies.

This is a challenging endeavour. Indeed, in a recent survey of the literature on transformative innovation policy, Haddad and colleagues [13] found that transitions- and mission-oriented innovation policy scholars have argued that evaluating systemic innovation and transition programmes requires evaluators to not only assess programmes against a broader set of relevant impacts and system-level transformative outcomes, but also account for interactions between instruments, engage (and manage conflicts between) a broader set of stakeholders, and achieve coordination between different scientific and technological fields, policy domains, and sectors (see also Amanatidou and colleagues [14] and Luederitz and colleagues [15] for an earlier discussion on the same topic). Moreover, evaluators should recognise that tracing causal mechanisms in complex and inert sociotechnical systems is difficult [10] and that policies can generate unanticipated system dynamics [16].

So far, however, the debate on the assessment of systemic programmes has remained rather fragmented—in terms of communities (innovation scholars, transition scholars, policy agencies, auditors, and evaluators) as well as content. Several recent attempts have been made to develop new analytical approaches to evaluation [15,17,18], but often without referring explicitly to other similar frameworks and methods. As a result, it is difficult to get a good overview of how different concepts and frameworks complement or substitute for each other. We therefore argue that it is time for a more extensive, structured, and creative debate on how to assess systemic innovation and transition programmes. To move forward in this endeavour, the model in Fig 1 provides some guidance by identifying 6 tentative categories of evaluations (A to F).

Fig 1.

Generic theory of change for systemic innovation and transition programmes, with evaluation categories A to F.

https://doi.org/10.1371/journal.pstr.0000008.g001

We map these categories on an axis ranging from formative evaluations, which can help us to understand why and how policies work (or not), to summative evaluations, which serve to attribute observed outcomes to policy effects. The former allow for uncovering causal mechanisms, which can also feed into more refined summative evaluations by shedding light on what (intermediate) outcomes to measure. In light of recent suggestions to focus on the complementarities between accountability and reflexivity (e.g., [19]), it seems advisable to combine summative evaluations with more frequent formative evaluations suitable for rapid learning and policy adaptation.

For both types of evaluations, a coherent assessment scheme for comparative analysis of systemic programmes would be useful—whether for drawing inspiration from elsewhere (in formative studies) or for obtaining a counterfactual (in summative studies).

For formative evaluation, there is an ongoing but poorly converging search for theoretical frameworks for assessing systemic programmes. This emerging literature mainly addresses 4 of the categories in Fig 1, in that it presents frameworks for assessing (A) the presence of a legitimate intervention rationale, such as system or transition failures [9]; (B) the quality of governance processes and structures [20] or policy mixes [21–23], e.g., in terms of consistency and coherence; (C) the match between policy interventions (within and across policy domains) and identified system weaknesses or bottlenecks [10,24,25]; and (D) improvements in the system’s performance, e.g., in the form of strengthened system functions [26] or transformation processes and outcomes [13,27]. Since these categories are complementary and all have the potential to affect policy impact, further research should focus on either reconciling different frameworks or providing clear guidance on when to use which one. One line of research is the use of novel methods like participatory tools that allow for engaging different stakeholders in the identification of impact processes [28] or the assessment of transformational policies [18]. This places increasing demands on evaluators to mobilise and empower relevant stakeholders from different fields, sectors, and policy levels; coordinate and align potentially divergent perspectives on problems and possible solutions (cf. [29]); and manage conflicts of interest [9].
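
To make this concrete, the sketch below illustrates, under assumptions entirely of our own, how divergent stakeholder ratings of system functions (category D) might be aggregated in such a participatory assessment. The function labels paraphrase those of Bergek and colleagues [26]; the 1-to-5 scale, the scores, and the summarise helper are hypothetical, and any real exercise would rest on a carefully designed elicitation protocol rather than this toy aggregation.

# A minimal illustrative sketch (Python), not drawn from any of the cited
# frameworks: aggregating hypothetical stakeholder ratings of system functions.
from statistics import mean, stdev

# Hypothetical ratings (1 = very weak, 5 = very strong) from three stakeholders.
# The function labels paraphrase Bergek and colleagues [26].
ratings = {
    "knowledge development and diffusion": [4, 3, 4],
    "influence on the direction of search": [2, 2, 3],
    "entrepreneurial experimentation": [3, 4, 2],
    "market formation": [1, 2, 2],
    "resource mobilisation": [3, 3, 4],
    "legitimation": [2, 4, 3],
}

def summarise(ratings):
    """Print mean strength and stakeholder disagreement per system function,
    weakest functions first, to flag candidate bottlenecks for discussion."""
    for function, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1])):
        print(f"{function:40s} mean={mean(scores):.1f} disagreement={stdev(scores):.1f}")

summarise(ratings)

In such a scheme, a low mean would flag a weak system function, while high disagreement would flag where stakeholder perspectives diverge and alignment work (cf. [29]) is most needed.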

Regarding summative evaluation, the good news is that some systemic programmes have measurable end goals, like an amount of CO2 emissions reduction (category F in Fig 1). The bad news is that ascribing actual societal impacts to developments in the underlying system is notoriously difficult and understudied [30]. Rigorous attribution-oriented evaluation approaches might therefore focus on intermediate goals and examine policy effects—including the effects of interacting policy instruments [7]—in the form of relevant structural changes (such as new firms or networks in a system—category E) or underlying innovation and transition processes (e.g., via network analysis techniques for studying knowledge sharing in the system—category D). A particularly salient feature of systemic policies, in this respect, is that they might aim to shift the system’s focus to a particular societal problem or set of associated solutions. Evaluation of the effects of providing such solution directionality would require methods for assessing the momentum of investments in prioritised solutions, vis-à-vis developments in other solutions. We therefore encourage scholars to explore new data collection and analysis methods, like semantic techniques using project descriptions for gauging whether supposedly improved systems evoke activities that are in line with current priorities. Some inspiration can be found in recent attempts to map research and development (R&D) projects on Sustainable Development Goals (SDGs) [31] or societal missions [32].
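
As a purely illustrative sketch of the kind of semantic technique we have in mind, the fragment below scores invented project descriptions against an invented mission priority using TF-IDF cosine similarity. Published mappings to SDGs or missions [31,32] rely on richer language models and curated data; the choice of TF-IDF, the texts, and the scikit-learn dependency are all simplifying assumptions of our own.

# An illustrative sketch of scoring project descriptions against a stated
# priority; all texts are invented, and TF-IDF is a deliberately simple
# stand-in for the semantic techniques discussed above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

priority = "reduce CO2 emissions through renewable electricity and energy storage"

projects = [
    "grid-scale battery storage for integrating offshore wind electricity",
    "catalysts for low-temperature industrial ammonia synthesis",
    "consumer attitudes towards electric vehicle adoption",
]

# Fit one vocabulary over the priority and the project descriptions, then
# score each project by cosine similarity to the priority vector.
vectoriser = TfidfVectorizer(stop_words="english")
vectors = vectoriser.fit_transform([priority] + projects)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

for description, score in sorted(zip(projects, scores), key=lambda x: -x[1]):
    print(f"{score:.2f} {description}")

Aggregated over a full project portfolio and tracked over time, such scores could give a first, rough indication of whether funding momentum is shifting towards prioritised solutions.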

To conclude, we emphasise that the adoption of transition thinking in emerging systemic policy approaches is merely an initial success. Real victory will only come from obtaining convincing evidence of the actual quality and impact of such policies, which can be used as a basis for policy learning and enhancing their social accountability.

This calls for further development of evaluation approaches for systemic innovation and transition programmes, as well as establishing a learning culture that is open to different ways of thinking about policy assessment. Probably the most fundamental shift concerns a deviation from evaluations of programmes in which there is a clear view from the outset of the linkages between policy inputs and envisaged impacts (e.g., in the form of a Theory of Change [33]) to evaluations that have to start with uncovering the very change mechanisms through which policy interventions might stimulate (or hamper) transformations of complex sociotechnical systems. In fact, in the case of wicked challenges, one of the purposes of systemic programmes can be to facilitate societal learning regarding which systemic problems prevent progress towards an unmet societal demand and how these problems can then be tackled [34]. However, while this type of learning requires policy experimentation, it does not follow that systemic programmes should not be held responsible for policy inefficiency or for a lack of impact. On the contrary, all evaluations are, in the end, supposed to shed light on policy effects and how they accumulate and interact. The 6 evaluation categories proposed above are intended as a first basis for collecting complementary pieces of evidence in this regard. Further research is needed to validate and possibly extend the model, as well as to identify and select appropriate empirical methods (including less conventional alternatives like outcome harvesting [35] or process tracing [36]) and reveal what lessons can be drawn from deploying and combining them.

References

  1. Kuhlmann S, Rip A. Next-Generation Innovation Policy and Grand Challenges. Sci Public Policy. 2018:1–7.
  2. Hekkert MP, Janssen MJ, Wesseling JH, Negro SO. Mission-oriented innovation systems. Environ Innov Soc Trans. 2020;34:76–9.
  3. Borrás S, Laatsit M. Towards system oriented innovation policy evaluation? Evidence from EU28 member states. Res Policy. 2019;48:312–21.
  4. Smits R, Kuhlmann S. The rise of systemic instruments in innovation policy. Int J Foresight Innov Policy. 2004;1:4.
  5. Arnold E, Åström T, Glass C, de Scalzi M. How should we evaluate complex programmes for innovation and socio-technical transitions? 2018.
  6. Arnold E. Evaluating research and innovation policy: a systems world needs systems evaluations. Res Eval. 2004;13:3–17.
  7. Bergek A, Jacobsson S, Hekkert M, Smith K. Functionality of innovation systems as a rationale and guide in innovation policy. In: Smits RE, Kuhlmann S, Shapira P, editors. The Theory and Practice of Innovation Policy. Cheltenham: Edward Elgar; 2010. p. 117–146.
  8. Klein Woolthuis R, Lankhuizen M, Gilsing V. A system failure framework for innovation policy design. Technovation. 2005;25:609–19.
  9. Weber KM, Rohracher H. Legitimizing research, technology and innovation policies for transformative change: Combining insights from innovation systems and multi-level perspective in a comprehensive “failures” framework. Res Policy. 2012;41:1037–47.
  10. Janssen MJ. What bangs for your buck? Assessing the design and impact of Dutch transformative policy. Technol Forecast Soc Change. 2019;138:78–94.
  11. Alaerts L, Van Acker K, Rousseau S, De Jaeger S, Moraga G, Dewulf J, et al. Towards a circular economy monitor for Flanders: a conceptual basis. OVAM. 2018.
  12. Fagerberg J. Mobilizing innovation for sustainability transitions: A comment on transformative innovation policy. Res Policy. 2018:1–9.
  13. Haddad C, Nakić V, Bergek A, Hellsmark H. The policymaking process of transformative innovation policy: a systematic review. 4th Int Conf Public Policy. 2019:1–45.
  14. Amanatidou E, Cunningham P, Gök A, Garefi I. Using Evaluation Research as a Means for Policy Analysis in a ‘New’ Mission-Oriented Policy Context. Minerva. 2014;52:419–38.
  15. Luederitz C, Schäpke N, Wiek A, Lang DJ, Bergmann M, Bos JJ, et al. Learning through evaluation–A tentative evaluative scheme for sustainability transition experiments. J Clean Prod. 2017;169:61–76.
  16. Hoppmann J, Huenteler J, Girod B. Compulsive policy-making—The evolution of the German feed-in tariff system for solar photovoltaic power. Res Policy. 2014;43:1422–41.
  17. Turnheim B, Berkhout F, Geels F, Hof A, McMeekin A, Nykvist B, et al. Evaluating sustainability transitions pathways: Bridging analytical approaches to address governance challenges. Glob Environ Chang. 2015;35:239–53.
  18. Molas-Gallart J, Boni A, Schot J, Giachi S. A Formative Approach to the Evaluation of Transformative Innovation Policy. 2020. Available from: http://www.tipconsortium.net/.
  19. Magro E, Wilson JR. Policy-mix evaluation: Governance challenges from new place-based innovation policies. Res Policy. 2019;48:103612.
  20. Kroll H. How to evaluate innovation strategies with a transformative ambition? A proposal for a structured, process-based approach. Sci Public Policy. 2019;46:635–47.
  21. Howlett M, Rayner J. Patching vs Packaging in Policy Formulation: Complementary Effects, Goodness of Fit, Degrees of Freedom, and Feasibility in Policy Portfolio Design. Polit Gov. 2013;1:170–82.
  22. Howlett M, Rayner J. Design Principles for Policy Mixes: Cohesion and Coherence in ‘New Governance Arrangements.’ Policy Soc. 2007;26:1–18.
  23. Rogge KS, Pfluger B, Geels FW. Transformative policy mixes in socio-technical scenarios: The case of the low-carbon transition of the German electricity system (2010–2050). Technol Forecast Soc Change. 2020;151:119259.
  24. van Mierlo B, Leeuwis C, Smits R, Woolthuis RK. Learning towards system innovation: Evaluating a systemic instrument. Technol Forecast Soc Change. 2010;77:318–34.
  25. Wesseling JH, Meijerhof N. Developing and applying the Mission-oriented Innovation Systems (MIS) approach. Working paper. 2021.
  26. Bergek A, Jacobsson S, Carlsson B, Lindmark S, Rickne A. Analyzing the functional dynamics of technological innovation systems: A scheme of analysis. Res Policy. 2008;37:407–29.
  27. Ghosh B. Transformative Outcomes: Assessing and Reorienting Experimentation with Transformative Innovation Policy. SSRN Electron J. 2020:739–56.
  28. Fisher R, Chicot J, Domini A, Misojic M, Polt W, Turk A, et al. Mission-oriented research and innovation: Assessing the impact of a mission-oriented research and innovation approach. European Commission. 2018.
  29. Wanzenböck I, Wesseling JH, Frenken K, Hekkert MP, Weber KM. A framework for mission-oriented innovation policy: Alternative pathways through the problem-solution space. Sci Public Policy. 2020;47.
  30. Stern E, Stame N, Mayne J, Forss K, Davies R, Befani B. Broadening the range of designs and methods for impact evaluations. 2012.
  31. Horne J, Recker M, Michelfelder I, Jay J, Kratzer J. Exploring entrepreneurship related to the sustainable development goals—mapping new venture activities with semi-automated content analysis. J Clean Prod. 2020;242:118052.
  32. Mateos-Garcia JC. Mapping Research & Innovation Missions. 2019.
  33. Connell JP, Kubisch AC, Schorr LB, Weiss CH. New Approaches to Evaluating Community Initiatives. Washington: Aspen Institute; 1995.
  34. Bours SAMJV, Wanzenböck I, Frenken K. Small wins for grand challenges. A bottom-up governance approach to regional innovation policy. Eur Plan Stud. 2021:1–28.
  35. Wilson-Grau R. Outcome Harvesting: Principles, Steps, and Evaluation Applications. Charlotte: IAP; 2018.
  36. Schmitt J, Beach D. The contribution of process tracing to theory-based evaluations of complex aid instruments. Evaluation. 2015;21:429–47.