
Strengthen causal models for better conservation outcomes for human well-being

  • Samantha H. Cheng ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Visualization, Writing – original draft, Writing – review & editing

    scheng@amnh.org

Affiliations National Center for Ecological Analysis and Synthesis, University of California-Santa Barbara, Santa Barbara, CA, United States of America, Center for Biodiversity and Conservation, American Museum of Natural History, New York, NY, United States of America

  • Madeleine C. McKinnon,

    Roles Conceptualization, Data curation, Funding acquisition, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Bright Impact, San Francisco, CA, United States of America

  • Yuta J. Masuda,

    Roles Formal analysis, Writing – original draft, Writing – review & editing

    Affiliation Global Science, The Nature Conservancy, Seattle, WA, United States of America

  • Ruth Garside,

    Roles Data curation, Formal analysis, Writing – review & editing

    Affiliation European Centre for Environment and Human Health, Truro, England, United Kingdom

  • Kelly W. Jones,

    Roles Data curation, Writing – review & editing

    Affiliation Colorado State University, Fort Collins, CO, United States of America

  • Daniel C. Miller,

    Roles Writing – original draft, Writing – review & editing

    Affiliation University of Illinois, Urbana-Champaign, IL, United States of America

  • Andrew S. Pullin,

    Roles Writing – review & editing

    Affiliation Center for Evidence-based Conservation, Bangor University, Bangor, Wales, United Kingdom

  • William J. Sutherland,

    Roles Data curation, Writing – review & editing

    Affiliation Department of Zoology, University of Cambridge, Cambridge, England, United Kingdom

  • Caitlin Augustin,

    Roles Writing – review & editing

    Affiliation DataKind, New York, NY, United States of America

  • David A. Gill,

    Roles Visualization, Writing – review & editing

    Affiliations Moore Center for Science, Conservation International, Arlington, VA, United States of America, Environmental Science and Policy, George Mason University, Fairfax, Virginia, United States of America, Duke University Marine Laboratory, Nicholas School of the Environment, Duke University, Beaufort, North Carolina, United States of America

  • Supin Wongbusarakum,

    Roles Writing – review & editing

    Affiliation National Oceanic and Atmospheric Administration, Honolulu, HI, United States of America

  • David Wilkie

    Roles Conceptualization, Data curation, Methodology, Writing – review & editing

    Affiliation Wildlife Conservation Society, Bronx, NY, United States of America

Abstract

Background

Understanding how the conservation of nature can lead to improvement in human conditions is a research area with significant growth and attention. Progress towards effective conservation requires understanding mechanisms for achieving impact within complex social-ecological systems. Causal models are useful tools for defining plausible pathways from conservation actions to impacts on nature and people. Evaluating the potential of different strategies for delivering co-benefits for nature and people will require the use and testing of clear causal models that explicitly define the logic and assumptions behind cause and effect relationships.

Objectives and methods

In this study, we outline criteria for credible causal models and systematically evaluate their use in a broad base of literature (~1,000 peer-reviewed and grey literature articles from a published systematic evidence map) on links between nature-based conservation actions and human well-being impacts.

Results

Out of 1,027 publications identified, only ~20% of articles used any type of causal model to guide their work, and only 14 articles fulfilled all criteria for credibility. Articles rarely tested the validity of models with empirical data.

Implications

Not using causal models risks poorly defined strategies, misunderstanding of potential mechanisms for effecting change, inefficient use of resources, and a focus on implausible efforts for achieving sustainability.

Introduction

Increasingly, nature conservation is seen as a viable global strategy for simultaneously improving human well-being and achieving environmental sustainability [1, 2]. These policies are predicated on the assumption that human well-being challenges can be addressed by maintaining or improving environmental conditions, particularly through the provisioning of natural resources and ecosystem services [3]. For example, nature conservation policies and practices include: protecting mangroves to reduce the impacts from tsunamis [4, 5]; urban tree planting to combat the negative effects of air pollution [6] and heat island effects [7]; and curbing schistosomiasis by reintroducing native river prawns [8]. But theory and evidence on whether, how, and to what extent these nature-based conservation interventions affect human well-being is relatively nascent [9], raising questions about the risks, requisite resources, and ultimately the role of conservation in achieving objectives across complex social-ecological systems. As such, designing and implementing effective solutions requires a better understanding of where, when, and which policies and actions lead to desired outcomes. In short, we need a clear understanding of the causality of nature conservation interventions in relation to intended outcomes for human well-being.

Designing effective conservation increasingly requires thinking about how interventions are situated within linked social and ecological systems where pathways are often interconnected and synergistic [10, 11]. Thus, in the face of complexity, there is a need for using more systems-based approaches that clearly articulate how components within social-ecological systems are connected [12]. This is especially important if we want to understand how changing one component can lead to cascading effects throughout a system, while also mitigating unintended consequences. Thus, achieving sustainability requires understanding complex patterns of cause and effect that are often not linear, but occur in feedback loops with multiple externalities and enabling conditions–particularly in the case of links between the ecosystems and well-being of people [13].

Across numerous disciplines, causal models have emerged as a critical tool for explicitly describing hypotheses of how cause and effect occur in complex systems. Causal models, as a whole, detail the logic and assumptions around how a series of interdependent steps will lead to intended outcomes. Causal models are not a new concept. In their simplest form, they are a type of hypothesis, though they are also often described as conceptual or theoretical models and frameworks. Examples include complex causation frameworks in political science [14], process models in engineering [15], structural model evaluations in business [16], and theories of change [17] and results chains [18] in development and conservation. In essence, these causal models all ask–how is an intervention or suite of interventions assumed to lead to desired outcomes? Based on underlying theory, these models explicitly describe mechanisms necessary to achieve goals, articulate assumptions, and clarify interdependencies between actions and objectives. Further, they provide a structured framework for defining and examining these relationships in both simulated and real-world contexts.

A causal model differs from two other types of models, methodological and conceptual models, which are commonly used to make sense of relationships and linkages within a system. A methodological model is used to test for causality. For example, difference-in-differences (DID) estimation (Y_i = α + β·T_i + γ·t_i + δ·(T_i × t_i) + ε_i) is a formulaic model used to estimate the effect of a specific intervention or treatment (such as the passage of a law, enactment of a policy, or large-scale program implementation) by comparing the changes in outcomes over time between a population that is enrolled in a program (the intervention group) and a population that is not (the control group) [19, 20]. This method mimics an experimental design by obtaining an appropriate counterfactual from observational data in order to estimate a causal effect. Conceptual models identify and characterize the existing conditions and key drivers that affect the current status of social or environmental variables within a system. They usually visually portray the relationships among the different factors within a situation analysis [21, 22]. We clarify here that our study speaks to causal models specifically, versus the methods used to test for causal relationships, or broader models which describe entire socio-ecological or political systems.
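To make the DID setup above concrete, the following minimal R sketch (illustrative only, using simulated data and hypothetical variable names such as enrolled, post, and income that are not drawn from any study in the evidence base) recovers δ as the coefficient on the interaction term:

  # Minimal difference-in-differences sketch with simulated data (illustrative only)
  set.seed(42)
  n        <- 400
  enrolled <- rbinom(n, 1, 0.5)   # T_i: 1 if the unit is in the intervention group
  post     <- rbinom(n, 1, 0.5)   # t_i: 1 if the observation is after the intervention
  income   <- 10 + 2 * enrolled + 1 * post +
              1.5 * enrolled * post + rnorm(n)   # simulated outcome; true delta = 1.5

  did_fit <- lm(income ~ enrolled * post)   # expands to enrolled + post + enrolled:post
  summary(did_fit)                          # the enrolled:post coefficient estimates delta

The coefficient on enrolled:post is the DID estimate of the treatment effect; in practice the same specification would be fit to panel or repeated cross-sectional observations rather than simulated values.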

A significant body of research has strongly argued for the utility of using causal model diagrams [23–25], citing, for instance, their usefulness for describing assumptions and hypotheses [26, 27], designing monitoring and evaluation plans [28], and explaining complex topics to lay audiences [29]. For example, clear articulation of the steps required to get from an intervention to a desired outcome can inform evaluation design by outlining key checkpoints and indicators to measure progress throughout a program life cycle [27]. Causal models are increasingly required by funders and used by implementing agencies and organizations interested in advancing sustainability goals. In the past decade, The Nature Conservancy [30], United States Environmental Protection Agency [31], Conservation International [32], Britain’s Department for International Development [27], the United States Agency for International Development [33], and others have emphasized the need for, and utility of, causal models. Using causal models can support critical thinking about how and why change can happen throughout a program life cycle, enabling more responsive and adaptive planning in complex situations. While the attention is welcome, little work has evaluated how these causal models are actually used in practice and research in conservation.

We aim to address this knowledge gap by examining over 1,000 scientific research articles on the linkages between conservation and human well-being outcomes from a systematic evidence map [34]. This corpus of literature is representative of extant approaches to evaluating the effect of conservation interventions, and thus can be illustrative of the extent and mode of application of causal models in the field.

Developing criteria for assessing causal models

Here, we take a broad, multi-disciplinary view of causal models and thus draw on available guidance and representative models from a diversity of sources ([17, 18, 21, 22, 27, 28, 35], including the Center for Theory of Change [36]) to develop three comprehensive criteria for assessing the credibility of employed causal models in conservation. We define a credible causal model as one that comprehensively articulates a causal pathway between actions, intermediate outputs, and a set of resultant outcomes, and is explicit about key assumptions and mechanisms between steps. Below, we explain each criterion and our rationale for their inclusion in our assessment rubric:

Criterion 1: Does it illustrate and describe a causal process of change? For example, the model must describe a cause and effect relationship between an action X and an outcome Y. It cannot simply indicate a link between two elements.

As this study focuses on causal models (versus methodological or conceptual models), explicit description of a cause and effect relationship–for example, identifying actions and outcomes–is required. Many of the causal models we surveyed to define these criteria included unlinked elements, for example, to highlight important enabling conditions or different states. However, all models consistently emphasized the importance of clearly identifying which components were interlinked and the direction of that link (e.g. cause vs. effect).

Criterion 2: Does it clearly outline a comprehensive set of intermediary steps and/or necessary pre-conditions or factors for the long-term outcome(s) to be achieved?

Working in complex scenarios requires in-depth thinking around the different pathways through which change and impact can occur, as well as consideration of how individual contexts can influence performance of an intervention. Thus, explicit and comprehensive outlining of the steps required to get from an intervention to an outcome lends greater clarity around what we expect to happen (e.g. a hypothesis), supports our ability to test its validity [37], and helps determine factors that contribute to unexpected outcomes. This can provide more detailed and practical information around how to improve and adapt conservation interventions, rather than just knowing whether something worked. For example, models meeting this criterion would describe the entire set of required enabling conditions. Where it is logically reasonable (i.e. depending on where along the causal pathway the study focuses), models meeting this criterion would also describe the entire hypothesized set of intermediate outputs that are required prior to achieving desired outcomes (e.g. a chain of outcomes).

Criterion 3: Is the model explicit in outlining assumptions and hypotheses about how an action influences a series of intermediary outcomes that lead to desired outcomes? For example, models meeting this criterion would detail how doing X action will lead to Y outcome because of Z, assuming A, B, and C conditions hold true.

Model approaches we reviewed consistently emphasize the importance of transparent and sufficient articulation of assumptions about how and why interventions lead to desired outcomes. Assumptions are often framed in program theory and program design as the “things that we believe to be true” and reflect the beliefs and perspectives of whomever created the model in question [36]. They can be thought of as the process that leads from one change to another–for example, the theory that increased environmental education will lead to an increase in pro-environmental behaviors is often based on the assumption that individuals make decisions based on information they receive from an educational program [38]. Knowing what these assumptions are helps facilitate more deliberate choices in intervention design, depending on whether they are context-appropriate. As there will always be multiple perspectives on how and why change will occur, clarity on assumptions is critical for readers to appropriately interpret findings in their own contexts.
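As an illustration of how the three criteria fit together, the short R sketch below (a hypothetical example loosely based on the environmental-education pathway above, not a model taken from any reviewed study) encodes a results chain so that every link, intermediate step, and underlying assumption is explicit:

  # Hypothetical results chain: each row is one cause-effect link with its assumption
  causal_model <- data.frame(
    from       = c("Environmental education program",
                   "Increased environmental knowledge",
                   "More pro-environmental behavior"),
    to         = c("Increased environmental knowledge",
                   "More pro-environmental behavior",
                   "Reduced pressure on local resources"),
    assumption = c("Participants attend and retain the program content",
                   "Individuals act on the information they receive",
                   "Behavior change is large enough to alter resource use"),
    stringsAsFactors = FALSE
  )
  # Criterion 1: every row is a directed cause-effect link, not just an association.
  # Criterion 2: the rows together spell out the intermediate steps, not only endpoints.
  # Criterion 3: each link states the assumption under which it is expected to hold.
  print(causal_model)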

Methods

We explored if and how causal models are used in conservation by assessing a set of 1,027 articles derived from a previously published, peer-reviewed systematic map of evidence linking conservation effects to human well-being outcomes [34, 35]. Systematic maps are thematic collections of empirical research studies and systematic reviews within a sector that map the distribution and occurrence of existing evidence using a framework of policy-relevant interventions and outcomes [39]. Systematic maps are increasingly being employed in the environmental management and conservation sector to provide clear, synthesized assessments of where critical knowledge gaps exist, to guide future research prioritization, and to illuminate areas of uncertainty [9]. This systematic map focused on non-OECD countries and included a broad range of interventions, study designs (ranging from non-experimental to experimental designs with quantitative and qualitative data), and human well-being outcomes (Table 1). Included articles were compiled using a Boolean search string to query peer-reviewed literature databases and grey literature sources, per McKinnon et al. 2016. The systematic map was conducted following standards and guidelines from the Collaboration for Environmental Evidence.

Table 1. Scope covered in the conservation literature dataset used in this analysis (derived from McKinnon et al. 2016).

https://doi.org/10.1371/journal.pone.0230495.t001

Data coding strategy

We use this evidence base to draw additional inference on causal model occurrence and use by further examining included studies regarding their use of causal models. All articles were screened and examined in four stages (Fig 1, Table 2). We screened articles for inclusion based on whether they employed any type of conceptual or modular model to capture causal thinking (see Stage 1 below) (Table 2). Studies were screened by two reviewers, and conflicts were discussed and resolved, with a third reviewer if needed. Included studies were then coded using a standard data extraction questionnaire to capture model characteristics, credible causal model criteria (see Stage 2 below), and information on how the models are presented (see Stage 3) and how they are used within the context of the study (see Stage 4) (S1 Table, Table 3). Bibliographic, intervention type, outcome type, and study design type information were drawn from the original systematic map dataset.

Fig 1. Coding scheme for assessing causal models, credible causal models, and use.

https://doi.org/10.1371/journal.pone.0230495.g001

Table 3. Paired example of credible (bold) and non-credible causal models from the conservation and human well-being evidence base.

https://doi.org/10.1371/journal.pone.0230495.t003

All studies were systematically screened and coded using the standard data extraction questionnaire by a team of reviewers, and results were cross-checked between at least two reviewers for consistency. Using the generalized linear model (glm) function in base R [41], we conducted a binomial regression to examine the impact of two independent variables (impact assessments and publication year) on the dependent variable (use of any causal model) until the model converged (~5000 iterations). As there were few studies with credible causal models (n = 14), we qualitatively describe the characteristics of these studies.
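For transparency, a sketch of this type of regression in R is shown below; the column names (used_causal_model, impact_eval, pub_year) and the simulated values are hypothetical stand-ins for the coded dataset, which is available with the published systematic map:

  # Sketch of the binomial (logistic) regression described above, on simulated data
  articles <- data.frame(
    used_causal_model = rbinom(1027, 1, 0.18),   # 1 if the article used any causal model
    impact_eval       = rbinom(1027, 1, 0.08),   # 1 if impact was measured against a counterfactual
    pub_year          = sample(1990:2015, 1027, replace = TRUE)
  )
  fit <- glm(used_causal_model ~ impact_eval + pub_year,
             family = binomial(link = "logit"),
             data   = articles)
  exp(coef(fit))   # exponentiated coefficients are interpreted as odds ratios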

Results

We found that the vast majority of the examined evidence base neglects to document, in any fashion (graphical, narrative, or formulaic), the underlying mechanisms by which the studied conservation intervention is predicted to affect human well-being outcomes. Only about a fifth (18.1%; n = 186) of the evidence base employed any type of causal model (graphical, narrative, and/or formulaic) to frame the study, choose indicators and design analyses, and/or evaluate and interpret results (Fig 2). Only ~1% of the total dataset (n = 14) fulfilled all criteria for credibility (Fig 2, S1 Table, S1 File). Most of these 14 models were presented graphically (n = 10), while 3 were presented narratively only and 1 in formulaic notation. Across all studies that employed some type of causal model, most (53.7%, n = 100) depicted their model graphically, while others described their model only narratively (n = 72) or with formulaic representation (n = 12) (S1 Table). We found that most examined models failed criteria 2 and 3 for credibility (56.5% and 75.8%, respectively).

Fig 2. Proportion of evidence base that met criteria for credible causal models.

https://doi.org/10.1371/journal.pone.0230495.g002

For each year after 1990, the odds of an article employing any type of causal model increased (odds ratio = 1.099, p<0.01), indicating increased use over time. On the other hand, more empirically robust studies (i.e. those that attempted to evaluate effect size using a counterfactual) were not significantly more likely to employ a causal model at all (odds ratio = 0.788, p>0.1). Of all the articles in the evidence base that measured impacts using a before/after or with/without comparison (n = 86), 11 described any type of model, with only 2 of those fulfilling criteria 1–3 for credibility. This demonstrates that among the studies we would most expect to clearly articulate how and why an intervention may lead to outcomes, the use of credible causal models (of any form) is remarkably low.

Of the 14 studies that employed credible causal models, all examined the impact of conservation interventions in forest ecosystems. Studies mostly focused on the impacts of alternative livelihood projects (n = 5), protected areas (n = 4), and resource management (n = 3). In terms of outcomes, almost all studies measured economic living standards (n = 13). Only two articles utilized an empirically robust study design (i.e. a counterfactual). Most studies used causal models to frame the study, but rarely tested or analysed the relationships indicated in the models (S2 Table). Eight of the credible causal models were novel, while others used previously established models or modifications thereof. For example, Salafsky et al. (2001) [42] used a previously published model [43], whereas Pegas et al. (2013) [44] modified an existing model [45] (S2 Table). Of the studies that used any type of model (credible or not) (n = 186), the majority of models were either insufficient (e.g. they highlighted components, not pathways) or purely descriptive narratives (e.g. simply stating “this framework describes various assets and how they influence environment and human well-being”) that are vulnerable to subjective interpretation.

Discussion

Causal models are increasingly highlighted as a valuable tool for illustrating and understanding relationships and interactions between interventions, outcomes, and impacts; yet our analysis identified few documented models within a large, recent, and relevant evidence base. This gap might reflect pragmatic publication constraints, or it might be symptomatic of broader concerns: a deficit in critical thinking, or a lack of incentives for comprehensive reporting.

The lack of documented causal models may be due to a number of pragmatic factors, including journal constraints on content or length, a lack of standards around consistent reporting, and/or low visibility of causal models (e.g. authors not explicitly stating whether models were used or where they are described in the paper or supplementary materials). All of these factors may result in low reporting of causal models, even if they were used. The multidisciplinary nature of conservation means standardized reporting of meta-data from evaluation studies and impact research frequently varies across fields [46]. This problem is not unique to conservation—recent studies highlight a concerted need for adopting meta-data standards (e.g. Dublin Core [47]) to ensure that published research is easier to find for efforts such as systematic reviews and meta-analyses [48]. While efforts to standardize reporting in conservation are growing, for example through establishing a common lexicon for conservation topics [40], these standards are not widely adopted by publishers and journals, and the onus remains on individual authors to use them.

A lack of documented causal models may also partially reflect limitations of our evidence base [34]. Our strategy for compiling the evidence base was intended to be comprehensive (i.e. capturing the breadth of topics relevant to links between conservation and human well-being) but not exhaustive (i.e. not attempting to capture every extant published study). For example, while the methods used to generate the systematic map attempted to comprehensively capture evidence from grey literature (unpublished literature) sources, some sources and reports may nevertheless have been missed (see [34]). Even so, given the complexity of understanding linkages between conservation and well-being and the breadth of the topic at hand, we would still expect this topic to be a priority area for considering causal relationships.

We found that documented credible causal models tended to be presented graphically, occasionally complemented by narrative descriptions. For example, models were often depicted using boxes and arrows, flow diagrams, and/or a combination of visual graphics and text. Amongst the authors of this study, we found this type of visual depiction very useful for clearly understanding the components and pathways being described and investigated. Overall, around half of the articles from the evidence base employing any type of conceptual model used a graphical depiction (n = 100). Using graphical depictions is particularly relevant for conservation–being a multidisciplinary space–as the terminology and language used to describe change pathways varies across disciplines, as well as across language and cultural boundaries [46, 49]. Thus, non-narrative approaches to describing models could be useful for facilitating a broader understanding of pathways of change, system dynamics, and pattern discovery across different disciplines and groups [23]. For example, visual graphics are often used to summarize and depict patterns and trends in data for broad communication of scientific findings, frameworks, and theories [50, 51]. Similarly, while formulaic representations are also useful for distilling relationships and linkages into an intuitive format, they can be limited, given that it is more difficult to incorporate explicit details on assumptions, conditions, and linked pathways. Thus, visual depictions of how interventions can lead to desired outcomes, such as flow diagrams or matrices, can be more accessible to a broader audience [18, 23]. However, we do recognize that these depictions are not universally accessible, for example, for the visually impaired. Thus, in order to be useful, visual depictions should be accompanied by a detailed narrative description.

Conceptual models were often employed to illustrate frameworks for categorizing outcomes and sets of enabling conditions related to the studied intervention, as opposed to describing an explicit causal relationship (S1 Table). For example, a number of articles (e.g. [52–55]) used frameworks derived from the Sustainable Livelihoods Framework [56] to categorize different livelihood assets/resources across different socio-economic groups within a conservation intervention (e.g. a protected area or species protection program).

Of the studies that did utilize credible causal models (n = 14), we find that the methodologies employed to test for causality were quite varied, and articles often did not employ robust methodologies (i.e. using before/after and/or with/without counterfactuals to attribute observed outcomes to the presence of a conservation intervention) for either quantitative or qualitative data. Conversely, of all the articles in the evidence base that employed robust quantitative methodologies (n = 67 of 1,027 articles, McKinnon et al. 2016), very few defined any type of causal model at all (4 out of 11 studies), much less a credible one (2 out of 11 studies).

Implications for conservation research and practice

There are three obvious risks or consequences of not using credible causal models (whether graphical or in other forms) for conservation research and, more broadly, decision-making.

First, without adequate explanation or theory of how interventions are likely to achieve results, we risk making assumptions that are, at best, unsupported and, at worst, implausible [57, 58]. Thus, to test whether existing assumptions around causal relationships are valid, models must detail how and why activities are thought to lead to particular outcomes [59]. If the intent is to apply research insights to inform conservation practice, these assumptions, and the underlying theory that supports them, need to be clearly articulated in order to understand whether study findings are reliable, much less applicable to different contexts.

Second, while we found many graphical depictions of causal models, a significant portion of the evidence base described their models only narratively. While narrative models are common, they can have implications for the external validity of published research, as they constrain the ability of others to replicate studies or test specific hypotheses in different contexts. This study found that graphical models complemented with a narrative description improved the ability of reviewers to understand and interpret the causal models in use. Graphical models are particularly helpful in providing a cross-cutting, intuitive framework to align planning across multiple disciplines and contexts, as well as to support cross-project learning and adaptation [18]. This is particularly critical as models should be interpretable across disciplinary, sectoral, and cultural boundaries in order to facilitate collaboration and communication in global initiatives to achieve sustainable landscapes at scale.

Finally, without clearly articulated models, it can be difficult to fully capture and understand relationships between variables in complex systems, including identifying where interdependencies, feedbacks, trade-offs, and unintended consequences may occur. This can make it difficult to isolate factors that affect the magnitude, attribution, and timing of observed results, particularly when analysing empirical data on impacts. For example, without explicitly defining how we think X is connected to outcomes Y and Z, it will be difficult to appropriately test for this relationship, much less see when there are potential feedbacks between X and Y or trade-offs between Y and Z. Moreover, typically a number of intermediate outcomes must be in place in order to achieve the longer-term outcomes that conservation aims for–for example, recovered ecosystem functions and decreased human poverty. Thus, without clear and detailed mapping of a hypothesized pathway to outcomes, it can be difficult to determine where problems occur and to identify intervention points for adaptive management [18]. This is particularly important as the dynamic nature of conservation challenges demands adaptive, responsive science and policies [60].

These risks are particularly problematic for making progress towards evidence-informed conservation practice and policy and bridging the gap between science and action. While the use of causal models (such as theory of change and results chains) is becoming a standard of practice in conservation and development organizations [13, 27, 61], the formulation and updating of these models will need to be informed by the breadth of existing empirical evidence, which continues to grow exponentially, particularly in environmental disciplines [62]. Applying insights from this growing body of work to these models will remain difficult if it is not clear how individual findings are relevant or whether they are appropriate for the model in question [63].

Guidelines for generating causal models for conservation are currently under development to facilitate a shared, cross-cutting evidence base with a common ontology across studies and disciplines, e.g. the Conservation Actions and Measures Library (CAML, http://cmp-openstandards.org/tools/caml/). Additionally, there has been a movement among major funders and organizations (e.g. USAID, USFWS, Bridge Collaborative, Conservation Measures Partnership) to develop and use reference causal models for sustainability to inform strategic priorities, activities, and evaluation of impact. These efforts can help practitioners and policymakers avoid repeating errors and help donors compare across streams of work using a common framework.

We recognize that not all research articles require causal models. However, the risks of not using a credible causal model in research that intends to evaluate causal impact of conservation, particularly for the purposes of informing conservation practice, are high. Thus, to facilitate progress in this area, we outline the following recommendations:

  • Promote use and reporting: We suggest that journals strongly encourage authors to include articulated causal models in submissions of empirical evaluations of interventions. Doing so will address low reporting of causal models and facilitate both greater use and transparency of models in literature.
  • Consider using visual, graphical depictions of causal models: We particularly encourage authors to consider communicating their models using graphical notations along with any other narrative or formulaic descriptions. Doing so will improve transparency and communication of articulated causal linkages and hypotheses around system connections.
  • Apply standards of practice: We encourage the conservation research and practice community to establish a minimum set of requirements for causal models to ensure transparency, replicability, credibility and integration into project design, funding requirements, and business processes. By doing so, we believe that this will help establish and foster the uptake of a new “gold standard” of practice. For example, for new proposed work, funders should require use of models and description of how project implementers will use the models to foster a standard of practice in making logical and well-supported value propositions. Among researchers and practitioners, we suggest that causal models should be more consistently integrated into project design and reporting. This is likely to require training and capacity-building.
  • Increase visibility and transparency of models: We encourage researchers and practitioners who are developing or have developed causal models to contribute to new or existing repositories in order to increase both model visibility and transparency. For example, the Conservation Actions and Measures Library hosts a repository for results chains and causal models that could be a good place to start. Making models openly available can make the underlying thinking more intelligible and explicit to a more diverse audience in an interdisciplinary context. We especially encourage increased visibility in order to help build an active community of practice around creating and validating causal models in conservation and sustainability writ large.

Achieving global sustainability requires developing sound interdisciplinary theories that will facilitate collaboration amongst diverse audiences and minimize misinterpretation. Consistent–and standardized–use of causal models can advance progress towards understanding how conservation impacts social-ecological systems by bringing subfields together to more holistically examine how impact occurs across entire systems.

Acknowledgments

We would like to thank the Science for Nature and People Partnership (SNAPP), a partnership of The Nature Conservancy, the Wildlife Conservation Society and the National Center for Ecological Analysis and Synthesis (NCEAS) at the University of California, Santa Barbara, for providing funding for the Evidence-Based Conservation working group. We specifically would like to thank all members of the SNAPP Evidence-Based Conservation working group for providing discussion, comments, feedback and support on developing this manuscript.

SHC, MCM, and DW conceived the ideas and designed methodology; SHC, DW, RG, WJS, KJW, and MCM collected the data; SHC, YM, and RG analysed the data; SHC, YM, MCM, and DCM led the writing of the manuscript. All authors contributed critically to the drafts and gave final approval for publication. We would like to thank the editors and reviewers who provided reflective feedback on this manuscript.

References

  1. UN General Assembly. Transforming Our World: The 2030 Agenda for Sustainable Development. 21 October 2015. A/RES/70/1; 2015.
  2. Wood SL, DeClerck F. Ecosystems and human well-being in the Sustainable Development Goals. Frontiers in Ecology and the Environment. 2015;13:123.
  3. Adams WM, Aveling R, Brockington D, Dickson B, Elliott J, Hutton J, et al. Biodiversity conservation and the eradication of poverty. Science. 2004;306(5699):1146–9. pmid:15539593
  4. Danielsen F, Sørensen MK, Olwig MF, Selvam V, Parish F, Burgess ND, et al. The Asian tsunami: A protective role for coastal vegetation. Science. 2005;310:643. pmid:16254180
  5. Laso Bayas JC, Marohn C, Dercon G, Dewi S, Piepho HP, Joshi L, et al. Influence of coastal vegetation on the 2004 tsunami wave impact in west Aceh. Proceedings of the National Academy of Sciences. 2011;108:18612–7.
  6. Nowak DJ, Heisler GM. Air Quality Effects of Urban Trees and Parks. National Recreation and Park Association Research Series. 2010.
  7. Doick KJ, Peace A, Hutchings TR. The role of one large greenspace in mitigating London's nocturnal urban heat island. Science of The Total Environment. 2014;493:662–71. pmid:24995636
  8. Sokolow SH, Huttinger E, Jouanard N, Hsieh MH, Lafferty KD, Kuris AM, et al. Reduced transmission of human schistosomiasis after restoration of a native river prawn that preys on the snail intermediate host. Proceedings of the National Academy of Sciences. 2015;112:9650–5.
  9. McKinnon MC, Cheng SH, Garside R, Masuda YJ, Miller DC. Sustainability: Map the evidence. Nature. 2015;528:185–7. pmid:26659166
  10. Berkes F, Turner NJ. Knowledge, Learning and the Evolution of Conservation Practice for Social-Ecological System Resilience. Human Ecology. 2006;34(4):479–94.
  11. Miller BW, Caplow SC, Leslie PW. Feedbacks between Conservation and Social-Ecological Systems. Conservation Biology. 2012;26:218–27. pmid:22443128
  12. Mahajan SL, Glew L, Rieder E, Ahmadia G, Darling E, Fox HE, et al. Systems thinking for planning and evaluating conservation interventions. Conservation Science and Practice. 2019. pmid:31915752
  13. Qiu J, Game ET, Tallis H, Olander LP, Glew L, Kagan JS, et al. Evidence-Based Causal Chains for Linking Health, Development, and Conservation Actions. BioScience. 2018;68:182–93. pmid:29988312
  14. Braumoeller BF. Causal Complexity and the Study of Politics. Political Analysis. 2003;11:209–33.
  15. Scacchi W. Process Models in Software Engineering. Encyclopedia of Software Engineering. Hoboken, NJ, USA: John Wiley & Sons, Inc.; 2002.
  16. Fornell C, Larcker DF. Structural Equation Models with Unobservable Variables and Measurement Error: Algebra and Statistics. Journal of Marketing Research. 1981;18:382.
  17. Funnell SC, Rogers PJ. Purposeful program theory: Effective use of theories of change and logic models. 2011;31.
  18. Margoluis R, Stem C, Swaminathan V, Brown M, Johnson A, Placci G. Results Chains: a Tool for Conservation Action Design, Management, and Evaluation. 2013;18.
  19. Greene WH. Econometric analysis. Pearson Education India; 2003.
  20. Costedoat S, Corbera E, Ezzine-de-Blas D, Honey-Roses J, Baylis K, Castillo-Santiago MA. How effective are biodiversity conservation payments in Mexico? PLoS One. 2015;10(3):e0119881. pmid:25807118
  21. Conservation Measures Partnership. Open Standards for the Practice of Conservation, Version 3.0. Conservation Measures Partnership; 2013.
  22. Foundations of Success. Using Conceptual Models to Document a Situation Analysis: An FOS How-To Guide. Bethesda, Maryland, USA: Foundations of Success; 2009.
  23. Larkin JH, Simon HA. Why a Diagram is (Sometimes) Worth Ten Thousand Words. Cognitive Science. 1987;11:65–100.
  24. Halpern JY. Appropriate Causal Models and the Stability of Causation. Review of Symbolic Logic. 2016;9:76–102.
  25. Norton SB, Schofield KA. Conceptual model diagrams as evidence scaffolds for environmental assessment and management. Freshwater Science. 2017;36:231–9.
  26. Grol MJ, Majdandzic J, Stephan KE, Verhagen L, Dijkerman HC, Bekkering H, et al. Parieto-Frontal Connectivity during Visually Guided Grasping. Journal of Neuroscience. 2007;27:11877–87. pmid:17978028
  27. Vogel I. Review of the use of ‘Theory of Change’ in international development. American Journal of Evaluation. 2010;24:501–24.
  28. W.K. Kellogg Foundation. Logic Model Development Guide. 2004:72.
  29. Nesbit JC, Adesope OO. Learning With Concept and Knowledge Maps: A Meta-Analysis. Review of Educational Research. 2006;76:413–48.
  30. Kareiva P, Groves C, Marvier M. The evolving linkage between conservation science and practice at the nature conservancy. Journal of Applied Ecology. 2014;51:1137–47. pmid:25641980
  31. United States Environmental Protection Agency. Measuring the Effectiveness of the Ocean Dumping Management Program. 2012. Report No.: EPA-100-K-12-012.
  32. Conservation International. Constructing theories of change for ecosystem-based adaptation projects: A guidance document. Arlington, VA: Conservation International; 2013:23.
  33. United States Agency for International Development. Using Results Chains to Depict Theories of Change in USAID Biodiversity Programming: A USAID Biodiversity How-To Guide 2. 2016:1–30.
  34. McKinnon MC, Cheng SH, Dupre S, Edmond J, Garside R, Glew L, et al. What are the effects of nature conservation on human well-being? A systematic map of empirical evidence from developing countries. Environmental Evidence. 2016:1–25.
  35. Bottrill M, Cheng S, Garside R, Wongbusarakum S, Roe D, Holland MB, et al. What are the impacts of nature conservation interventions on human well-being: a systematic map protocol. 2014:1–11.
  36. Taplin DH, Clark H, Collins E, Colby DC. Technical papers: a series of papers to support development of theories of change based on practice in the field. 2013:23.
  37. Margoluis R, Stem C, Salafsky N, Brown M. Using conceptual models as a planning and evaluation tool in conservation. Evaluation and Program Planning. 2009;32:138–47. pmid:19054560
  38. Sawitri DR, Hadiyanto H, Hadi SP. Pro-environmental Behavior from a Social Cognitive Theory Perspective. Procedia Environmental Sciences. 2015;23:27–33.
  39. James KL, Randall NP, Haddaway NR. A methodology for systematic mapping in environmental sciences. Environmental Evidence. 2016;5:7.
  40. Salafsky N, Salzer D, Stattersfield AJ, Hilton-Taylor C, Neugarten R, Butchart SHM, et al. A standard lexicon for biodiversity conservation: Unified classifications of threats and actions. Conservation Biology. 2008;22:897–911. pmid:18544093
  41. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2017.
  42. Salafsky N, Cauley H, Balachander G, Cordes B, Parks J, Margoluis C, et al. A Systematic Test of an Enterprise Strategy for Community-Based Biodiversity Conservation. Conservation Biology. 2001;15:1585–95.
  43. Salafsky N, Wollenberg E. Linking livelihood and conservation: a conceptual framework and scale for assessing the integration of human needs and biodiversity. World Development. 2000;28:1421–38.
  44. Pegas FdV, Coghlan A, Stronza A, Rocha V. For love or for money? Investigating the impact of an ecotourism programme on local residents' assigned values towards sea turtles. Journal of Ecotourism. 2013;12:90–106.
  45. Seymour E, Curtis A, Pannell D, Allan C, Roberts A. Understanding the role of assigned values in natural resource management. Australasian Journal of Environmental Management. 2010;17:142–53.
  46. Westgate MJ, Haddaway NR, Cheng SH, McIntosh EJ, Marshall C, Lindenmayer DB. Software support for environmental evidence synthesis. Nat Ecol Evol. 2018;2(4):588–90. pmid:29572524
  47. Arakaki FA, Santos PLVAdC, Alves RCV. Evolution of Dublin Core Metadata Standard: An Analysis of the Literature from 1995–2013. 2015:3.
  48. Jiang G, Evans J, Endle CM, Solbrig HR, Chute CG. Using Semantic Web technologies for the generation of domain-specific templates to support clinical study metadata standards. J Biomed Semantics. 2016;7:10. pmid:26949508
  49. Westgate MJ, Lindenmayer DB. The difficulties of systematic reviews. Conservation Biology. 2017;00:1–6.
  50. Otten JJ, Cheng K, Drewnowski A. Infographics and public policy: using data visualization to convey complex information. Health Affairs. 2015;34:1901–7. pmid:26526248
  51. Berinato S. Good Charts: The HBR Guide to Making Smarter, More Persuasive Data Visualizations. Harvard Business Review Press; 2016.
  52. Greenheck FM. Environmental Conservation and Sustainable Livelihoods. An anthropological approach toward community-based custody and valuation of local resources in the context of marine turtle conservation in Costa Rica. Progress report 2007–2008 of the Community Livelihood Improvement Program (CLIP) to WWF Marine and Species Program for Latin America and the Caribbean. San Jose, Costa Rica: World Wildlife Fund; 2009.
  53. Ha TTP, van Dijk H, Bosma R, Sinh LX. Livelihood capabilities and pathways of shrimp farmers in the Mekong. Aquaculture Economics & Management. 2013;17(1):1–30.
  54. Mahdi, Shivakoti GP, Schmidt-Vogt D. Livelihood Change and Livelihood Sustainability in the Uplands of Lembang Subwatershed, West Sumatra, Indonesia, in a Changing Natural Resource Management Context. Environmental Management. 2009;43:84–99. pmid:18506516
  55. Liang Y, Li S, Feldman MW, Daily GC. Does household composition matter? The impact of the Grain for Green Program on rural livelihoods in China. Ecological Economics. 2012;75:152–60.
  56. Scoones I. Sustainable Livelihood Framework: A Framework for Analysis. 1998.
  57. McKinnon MC, Mascia MB, Yang W, Turner WR, Bonham C. Impact evaluation to communicate and improve conservation non-governmental organization performance: the case of Conservation International. Philosophical Transactions of the Royal Society B: Biological Sciences. 2015;370.
  58. Agrawal A, Redford K. Poverty, Development, And Biodiversity Conservation: Shooting in the Dark? Wildlife Conservation Society Working Paper. 2006:1–50.
  59. Rogers PJ. Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation. 2008;14:29–48.
  60. Palmer MA. Socioenvironmental Sustainability and Actionable Science. BioScience. 2012;62:5–6.
  61. Stem C, Margoluis R, Salafsky N, Brown M. Monitoring and Evaluation in Conservation: a Review of Trends and Approaches. Conservation Biology. 2005;19(2):295–309.
  62. Li W, Zhao Y. Bibliometric analysis of global environmental assessment research in a 20-year period. Environmental Impact Assessment Review. 2015;50:158–66.
  63. Sutherland WJ, Wordley CFR. Evidence complacency hampers conservation. Nature Ecology & Evolution. 2017.