Inconsistent recognition of uncertainty in studies of climate change impacts on forests

Background. Uncertainty about climate change impacts on forests can hinder mitigation and adaptation actions. Scientific enquiry typically involves assessments of uncertainties, yet different uncertainty components emerge in different studies. Consequently, inconsistent understanding of uncertainty among different climate impact studies (from the impact analysis to implementing solutions) can be an additional reason for delaying action. In this review we (a) expanded existing uncertainty assessment frameworks into one harmonised framework for characterizing uncertainty, (b) used this framework to identify and classify uncertainties in climate change impact studies on forests, and (c) summarised the uncertainty assessment methods applied in those studies. Methods. We systematically reviewed climate change impact studies published between 1994 and 2016. We separated these studies into those generating information about climate change impacts on forests using models ('modelling studies') and those that used this information to design management actions ('decision-making studies'). We classified uncertainty across three dimensions: nature, level, and location, which can be further categorised into specific uncertainty types. Results. We found that different uncertainties prevail in modelling versus decision-making studies. Epistemic uncertainty is the most common nature of uncertainty covered by both types of studies, whereas ambiguity plays a pronounced role only in decision-making studies. Modelling studies investigate all levels of uncertainty about equally, whereas decision-making studies mainly address scenario uncertainty and recognised ignorance. Finally, the main location of uncertainty for both modelling and decision-making studies is within the driving forces, representing, e.g. socioeconomic or policy changes.
The most frequently used methods to assess uncertainty are expert elicitation, sensitivity and scenario analysis, but a full suite of methods exists that seems currently underutilized. Discussion & Synthesis. The misalignment of uncertainty types addressed by modelling and decision-making studies may complicate adaptation actions early in the implementation pathway. Furthermore, these differences can be a potential barrier for communicating research findings to decision-makers.


Background
Despite overwhelming evidence about climate change impacts on natural and human systems (Cramer et al 2014), uncertainty about impacts is often perceived as one of the main challenges for taking action on climate change (Moser and Ekstrom 2010, Hanger et al 2013, Yousefpour and Hanewinkel 2016). In forest management, a key problem is that actions to maintain ecosystem functions under a changing climate need to be taken several decades before their expected effect (Spittlehouse and Stewart 2003, Millar et al 2007). Yet, uncertainties related to future forest growth, the occurrence of disturbances, and mortality complicate decisions about the most suitable adaptation and mitigation measures to implement (O'Hara and Ramage 2013, Petr et al 2016, Seidl et al 2017), e.g. which tree species to plant. Furthermore, other drivers, such as future policies and societal demands for forest services, increase uncertainty about appropriate management options.
Therefore, understanding and embracing uncertainty is an important factor for successful climate change adaptation and mitigation, but a prevailing problem for many climate change-related studies is how to grasp and report uncertainty in their findings. Uncertainty is context- and domain-dependent, which influences how different scientists recognise and deal with it (Bryant et al 2018). Moreover, the conceptualisation of uncertainty might differ between studies, leading to different understandings of what is meant by uncertainty or what is included in its quantification, and hence reported in scientific papers. For example, climate impact modelling studies aim to, among other things, represent processes and generate information using computer models. In terms of uncertainty, modelling studies routinely quantify uncertainties related to the imperfect knowledge of the system under investigation (Uusitalo et al 2015, Gray 2017, Marchand et al 2018). On the other hand, studies exploring how users assess available information and use it to make long-term decisions (hereafter, 'decision-making' studies) (Schmolke et al 2010) more rarely quantify uncertainties. In particular, there is a lack of studies investigating uncertainty in stakeholder values or priorities about forest use, even though these can strongly influence how foresters design and apply adaptive management strategies (McDaniels et al 2012, Lawrence and Marzano 2014). Therefore, when quantifying individual components of the 'cascade of uncertainty' prevalent in climate impact studies (Jones 2000, Reyer 2013), its perception in the decision-making process is often ignored (Petr et al 2014a, Radke et al 2017). On the one hand, this may be due to the large number of external drivers containing unpredictable factors, such as future stakeholders' needs and policy changes driven by stochastic human behaviour, which increase the complexity of decision-making studies.
On the other hand, while many methods are available for estimating uncertainty in quantitative modelling, such as the Model-Independent Parameter Estimation and Uncertainty Analysis tool PEST for environmental modelling (Doherty 2015, http://pesthomepage.org/), fewer techniques have been suggested for more qualitative decision-making studies. Also, some widely used uncertainty frameworks have been designed for classifying uncertainties in modelling studies (Refsgaard et al 2007, Kwakkel et al 2010), but to our knowledge only a few studies have tested and developed frameworks for decision-making studies (Ascough et al 2008, Petr et al 2014a). This imbalance might lead to substantially different types of uncertainties being covered by the different types of research. In this review, we address the lack of knowledge about which aspects of uncertainty prevail or are missing in modelling and decision-making studies in forest science, and how these study types differ in their understanding of uncertainty. To answer these questions, we developed a new multi-dimensional uncertainty framework, which we used to systematically classify uncertainties in modelling and decision-making studies published in the scientific literature. Finally, we summarized the uncertainty assessment methods applied by those studies, to provide an overview of the methods at hand. Classifying uncertainty will not only allow us to better recognise, quantify and communicate it (Marchau 2003, Nicol et al 2019, van der Bles et al 2019) but also, and more fundamentally, help us to understand where knowledge gaps are, and how much we know or do not know about a problem.

Conceptual framework

Uncertainty definitions
Uncertainty is a complex concept with multiple definitions (Refsgaard et al 2007, Ascough et al 2008). Consequently, the literature offers a broad range of meanings and interpretations of the term. Table 1 provides examples of existing definitions across different research fields, from general environmental science to forest ecology and management. These examples show an objective-subjective gradient from the natural to the decision-making research disciplines. Yet, in essence, uncertainty represents 'any departure from the unachievable ideal of complete determinism', which is the broad definition we also adopt in this paper.

Dimensions and types of uncertainty
Beyond this simple definition, uncertainty can be categorised according to its dimensions or sources (Rotmans 2002, Walker et al 2003). These dimensions refer to the different ways in which uncertainty can be understood, interpreted, and addressed. In their conceptual basis for uncertainty classification in model-based decision support systems, Walker et al (2003) defined three dimensions of uncertainty: location, level and nature. The location describes where in a method/model the uncertainty occurs, e.g. in parameters or driving forces (see table 2). The level describes the degree of knowledge available, ranging from the ideal state of complete knowledge (determinism) to the state of completely imperfect knowledge (total ignorance). Finally, the nature describes the reason for the lack of knowledge, arising either from imperfect information (epistemic) or from natural variability (stochastic). We expanded this framework with additional uncertainty types that relate more closely to decision-making processes. Specifically, we added the locations 'model selection', 'model implementation', 'information selection/decision' and 'type of information outputs' as well as the nature 'ambiguity' (after Kwakkel et al 2010). Table 2 presents each of the uncertainty types, their definition and an example. To ensure the relevance of our framework, we included an uncertainty type only if we could provide an example from the climate-forest nexus.
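As an illustration, the three dimensions could be encoded as a small data structure. The Python sketch below uses hypothetical class and field names of our own choosing; only the category names follow the framework described above:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the framework's dimensions; the category
# names follow the text, the class/field names are illustrative only.
class Nature(Enum):
    EPISTEMIC = "epistemic"
    STOCHASTIC = "stochastic"
    AMBIGUITY = "ambiguity"

class Level(Enum):
    STATISTICAL = "statistical"
    SCENARIO = "scenario"
    RECOGNISED_IGNORANCE = "recognised ignorance"

@dataclass(frozen=True)  # frozen -> hashable, so entries can be deduplicated
class UncertaintyEntry:
    location: str   # e.g. "model parameters", "inputs-driving forces"
    level: Level
    nature: Nature
    method: str     # first assessment method reported for this combination

entry = UncertaintyEntry("inputs-driving forces", Level.SCENARIO,
                         Nature.EPISTEMIC, "scenario analysis")
print(entry.nature.value)  # epistemic
```

Because the dataclass is frozen, identical (location, level, nature, method) combinations collapse in a set, mirroring the deduplication used in the review.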

Uncertainty assessment methods
To understand how the different uncertainty dimensions and types can be assessed, we complemented our framework with existing methods for uncertainty assessment from Refsgaard et al (2007). These include widely used quantitative methods such as scenario analysis or Monte Carlo analysis, but also more qualitative methods such as stakeholder involvement (see figure 1). All 15 uncertainty assessment methods are defined in table S1 (available online at stacks.iop.org/ERL/14/113003/mmedia), with 'other' methods added to the list. We note that Refsgaard et al (2007) only consider 'sensitivity analysis' in general terms. Yet, there are differences between global and local sensitivity analysis, with global approaches being much more useful in assessing model/parameter uncertainty because they account for nonlinear effects and (hierarchical) parameter interactions.
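The local versus global distinction can be illustrated with a minimal sketch. The toy growth model below is entirely hypothetical (not taken from any reviewed study); because it contains an interaction term, a one-at-a-time local analysis around a single point and a crude variance-based global analysis weight the two inputs differently:

```python
import random

def growth(temp, precip):
    """Hypothetical toy growth model with a temp x precip interaction."""
    return 0.5 * temp + 0.2 * precip + 0.3 * temp * precip

def local_sensitivity(t0=1.0, p0=1.0, eps=1e-6):
    """Local, one-at-a-time finite differences around a single base point."""
    y0 = growth(t0, p0)
    return ((growth(t0 + eps, p0) - y0) / eps,
            (growth(t0, p0 + eps) - y0) / eps)

def first_order_indices(n=20_000, bins=20, seed=1):
    """Crude global, variance-based first-order indices by binning:
    S_i ~ Var(E[Y | X_i]) / Var(Y), with inputs sampled over [0, 2)."""
    rng = random.Random(seed)
    xs = [(rng.uniform(0, 2), rng.uniform(0, 2)) for _ in range(n)]
    ys = [growth(t, p) for t, p in xs]
    mean_y = sum(ys) / n
    var_y = sum((y - mean_y) ** 2 for y in ys) / n
    indices = []
    for dim in (0, 1):
        groups = [[] for _ in range(bins)]
        for x, y in zip(xs, ys):
            groups[min(int(x[dim] / 2 * bins), bins - 1)].append(y)
        bin_means = [sum(g) / len(g) for g in groups if g]
        var_cond = sum((m - mean_y) ** 2 for m in bin_means) / len(bin_means)
        indices.append(var_cond / var_y)
    return indices

d_temp, d_precip = local_sensitivity()    # ~ (0.8, 0.5) at this base point
s_temp, s_precip = first_order_indices()  # global view: temp dominates more
```

A proper global analysis would use e.g. Sobol or Morris sampling; the binned estimator here is only meant to show that the two perspectives rank and weight inputs differently when interactions are present.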

Uncertainty assessment framework
Based on previously published uncertainty assessment frameworks (Refsgaard et al 2007, Warmink et al 2010), we developed a novel framework to identify and classify uncertainties. Previous frameworks have provided a comprehensive overview of the multi-dimensionality of uncertainty, including methods and application examples. However, they have not integrated modelling and decision-making perspectives into one coherent framework together with applicable uncertainty assessment methods. To that end, we compiled the uncertainty dimensions and types (described in table 2) as well as existing methods for uncertainty assessment (table S1) into one uncertainty assessment framework. This final uncertainty assessment framework consists of three dimensions of uncertainty (level, nature, location), further characterised by 17 uncertainty types and 15 assessment methods (figure 1).

Literature search and review
We conducted a systematic review of uncertainty related to climate change impact research in forest science, with a focus on modelling and decision-making studies. We used the Scopus database to search for published, peer-reviewed scientific papers in English. We used the search string ((climat * change) AND forest AND uncertain * AND model * ) for modelling studies, and replaced 'AND model * ' by 'AND management' AND 'behavior * OR attitude * OR polic * ' for decision-making studies. The search was carried out by researchers based in Edinburgh, UK. It yielded 1079 papers (78% modelling and 22% decision-making) published between 1994 and 2016. To minimise the bias towards modelling studies, we randomly selected 191 (i.e. 22%) modelling papers for further abstract scrutiny. After examining the abstracts of all papers, we ended up with 69 modelling and 31 decision-making papers for further analysis.
For each paper we recorded the following attributes: author(s), year of publication, type of paper (primary research, review, other), spatial coverage (local, regional, multi-country, continental, global), and study area (country). We classified each paper into one of nine categories of research topics (carbon balance, conservation/restoration, fire/drought/pests, forest management planning, forest dynamics, forest policy, mortality, species distribution, and others). For decision-making papers only, we also recorded the management stage that was studied (operational & tactical, strategic & organisational, and/or policy-making) (Oesten and Roeder 2012, table S2).
We thoroughly reviewed each paper using our uncertainty framework and captured all types of uncertainty (nature, level, location, and their unique combinations) identified therein, as well as the uncertainty assessment methods used for each entry. If the same combination of uncertainty types was addressed with the same method, we only recorded the first occurrence reported. Hence, out of the 69 modelling and 31 decision-making papers, we extracted 139 and 65 unique combinations of uncertainty types, respectively (table S3). We only recorded uncertainties related to the actual research carried out within the papers. As the reviewing task was shared among co-authors, we reduced subjectivity in classifying uncertainty types by having the main author cross-check all entries.

Analysis
First, we derived summary statistics for the publication year, study area, spatial coverage, and research topic. Second, we counted the number of papers addressing each type of uncertainty, and tested whether the reporting frequency of uncertainty natures and levels differed between modelling and decision-making papers (Chi-square test). We did not compare locations, because these uncertainty types varied widely between studies. Next, we compared the frequency of unique combinations of nature x location and level x location between modelling and decision-making studies, as well as the frequency of uncertainty natures and levels across different stages of management (decision-making papers only). Finally, we identified the most frequently used uncertainty assessment methods for each nature and level of uncertainty. Our analyses were conducted using the R language and environment for statistical computing (R Core Team 2018), in particular the tidyverse package (Wickham 2017).
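As an illustration of the Chi-square comparison, the sketch below computes Pearson's statistic for a contingency table of reporting frequencies; the counts are purely illustrative, not the review's actual data:

```python
def chi_square(table):
    """Pearson's chi-square statistic for an r x c contingency table."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Illustrative counts of entries per uncertainty nature
# (epistemic, stochastic, ambiguity) -- NOT the review's actual data.
table = [[120, 15, 4],   # modelling entries
         [37, 7, 21]]    # decision-making entries
stat = chi_square(table)
# df = (2 - 1) * (3 - 1) = 2; the 5% critical value is 5.991
print(stat > 5.991)  # True -> frequencies differ significantly
```

In practice a library routine (e.g. a chi-square contingency test in R or SciPy) would also return the p-value; the hand-rolled statistic above is just to make the computation explicit.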

Summary of reviewed papers
Out of the 69 modelling and 31 decision-making papers, the majority were published after 2000 and 2004, respectively. Only three papers addressed uncertainty from both the modelling and decision-making perspectives. The studies covered all continents, with a prevalence of North American (41%) and European (27%) studies. A large proportion of studies focused on estimating carbon stocks and fluxes (25% of modelling and 1% of decision-making entries), followed by risks of fire, drought, and pests (10% and 7%), and forest management (4% and 11%). The latter two topics were the most frequent in decision-making studies. The dominant spatial scales were regional and local, representing 52% and 27% of all studies, respectively. However, modelling studies covered a wider range of spatial scales, including global and continental-scale studies.

Uncertainty nature and level
When comparing unique combinations of uncertainty types addressed by modelling and decision-making studies, we found significant differences (p<0.05) across both nature and level (figure 2). Epistemic uncertainty was the most frequent uncertainty nature covered in both groups of studies, representing 86% of modelling and 57% of decision-making entries. Ambiguity was relevant only for decision-making entries (32%). For the uncertainty level, the modelling entries were rather equally distributed, with the highest proportion associated with scenario uncertainty (35%); in decision-making studies, the most represented uncertainty level was recognised ignorance (35%), followed by scenario uncertainty (26%).
Considering a classification across both level and nature, we found a similar pattern for modelling and decision-making studies, except for ambiguity (figure 2). Modelling studies addressed epistemic uncertainty equally across all three levels of uncertainty. Stochastic uncertainty was only treated in combination with statistical and scenario uncertainty, whereas ambiguity was equally associated with all three uncertainty levels. In decision-making studies, a large proportion of epistemic uncertainty could not be associated with any level ('not available' in figure 2). Most entries dealing with ambiguity were combined with assessments of scenario uncertainty, while stochastic uncertainty combined equally with all uncertainty levels.

Uncertainty location
The main locations addressed by modellers were 'model parameters' (26%), 'inputs-driving forces' (23%), and 'model outputs' (18%). For these three locations, the most frequent level of uncertainty was scenario (for inputs-driving forces) or statistical (for model parameters and outputs) (figure 3). Still, a non-negligible number of entries reported 'recognised ignorance' for locations such as model structure (67% of the respective entries), model parameters (39%) and inputs-system data (33%). Very rarely did modelling studies report uncertainty in 'model implementation' (1%). For modelling studies, epistemic uncertainty was the preferred way to characterize all uncertainty locations. Ambiguity, on the contrary, appeared at only four locations. Decision-making papers mainly addressed 'inputs-driving forces' (35% of entries) and 'information selection or decision' (26%). Epistemic uncertainty was the preferred way to characterize all locations. Regarding combinations of location and level, 'inputs' and 'context and framing' were never associated with statistical uncertainty, which instead was sometimes used to characterize uncertainty in 'model outputs' (13% of entries) and 'information selection' (12%). Recognised ignorance was the most frequent uncertainty level for all uncertainty locations.

Uncertainty types represented at different management stages
The entries from the decision-making papers mainly represented the 'operational & tactical' management stage (57%), followed by the 'strategic & organisational' (20%) and 'policy-making' (19%) stages. Operational, strategic and policy analyses were mostly linked to epistemic uncertainty (figure 4). The entries dealing with operational and strategic management were rather evenly distributed among uncertainty levels, while policy-making studies were mostly associated with recognised ignorance.

Methods for uncertainty assessment
Distinct uncertainty assessment methods were used in modelling and decision-making studies. In fact, only three methods were used in both groups of papers: expert elicitation, scenario analysis, and sensitivity analysis (figure 5). Among these, only scenario analysis was used for assessing stochastic uncertainty, while all three were used for epistemic uncertainty and ambiguity. Overall, a large suite of uncertainty assessment methods (10) was used in modelling studies to analyse epistemic uncertainty, five for ambiguity, and four for stochastic uncertainty. In decision-making studies, epistemic uncertainty was analysed using six methods in total, ambiguity using four, and stochastic uncertainty using three methods. All levels of uncertainty were analysed by an equal number of methods overall (nine). In modelling studies, the widest range of methods was used for statistical uncertainty, followed by recognised ignorance and scenario uncertainty. In decision-making studies, scenario uncertainty was associated with twice as many methods (six) as statistical uncertainty and recognised ignorance (three each). Scenario analysis, Monte Carlo analysis, and multiple model simulations were the most versatile methods, being applied at least once for every uncertainty level and nature. Finally, five methods were applied to only one uncertainty type, e.g. exploratory modelling or error propagation equations.
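To illustrate why Monte Carlo analysis is among the most versatile methods, the sketch below propagates statistical uncertainty in a single parameter of a hypothetical stand-biomass model into an output distribution; all numbers are invented for illustration:

```python
import random
import statistics

def biomass(growth_rate, years=50, init=100.0):
    """Hypothetical exponential stand-biomass model (units arbitrary)."""
    return init * (1 + growth_rate) ** years

def monte_carlo(n=10_000, seed=7):
    """Propagate statistical uncertainty in the growth-rate parameter
    (location: 'model parameters') into an output distribution."""
    rng = random.Random(seed)
    draws = sorted(biomass(rng.gauss(0.02, 0.005)) for _ in range(n))
    # Report an uncertainty band rather than a single point estimate
    return draws[int(0.05 * n)], statistics.median(draws), draws[int(0.95 * n)]

low, mid, high = monte_carlo()  # 5th percentile, median, 95th percentile
```

Because the whole output distribution is available, the same machinery serves statistical uncertainty (percentile bands), scenario uncertainty (re-run per scenario), and both epistemic and stochastic natures (depending on what the sampled distribution represents).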

Discussion
Our review of the scientific literature on climate change impact and adaptation in forests showed a multi-dimensional understanding of uncertainty, which was described by different natures, levels, and locations. Acknowledging this multi-dimensionality can be crucial for understanding knowledge gaps in modelling future climate impacts on forests, or analyzing the decision-making process of forest stakeholders under climate change. Moreover, understanding the different dimensions of uncertainty can help modellers and decision-making scientists to identify what types of uncertainty exist, how to communicate them, and what would be necessary to reduce them, if possible.
We have used the example of climate impacts on forests, but our framework is also useful for other areas of climate impact science. The types of models used to simulate climate impacts on forests, the types of methods used to assess uncertainties, and our conceptualisation of uncertainty are very similar to those used in hydrology (Kundzewicz et al), reflecting the inherent complexity of the challenges in these areas. However, forest management is also special because it deals with long planning horizons, and uncertainty increases over time (Augustynczik et al 2017). Therefore, uncertainty analysis in forest management has the potential to be a very informative template that can be adopted and applied to other ecological systems.

Modelling versus decision-making studies
We found significant differences in the understanding of uncertainty between modelling and decision-making studies. These differences point to a misalignment in how the different study types address uncertainty, and have the potential to misguide the communication of uncertainty when those studies are used as an evidence base to support decisions.
Modelling studies mostly focused on epistemic uncertainty, whereas addressing ambiguity and stochastic uncertainty was less common. This highlights that modellers strive to estimate how much uncertainty about the system they model can be reduced by using more accurate input information, improving model structure (e.g. Cheaib et al 2012), or filling knowledge gaps about ecological processes (e.g. Littell et al 2011). Decision-making studies addressed uncertainty across a wider spectrum of natures than modelling studies. This reflects a broader view of the problems that these studies investigate, as opposed to the more targeted and narrower perspective typically adopted by modelling studies. Modelling studies seem to address more process-oriented uncertainties, while decision-making studies deal with more policy-oriented uncertainties. In fact, decision-making studies focused on forests as providers of services like timber and/or recreation, broadening the boundaries of their analysis to incorporate, for example, stakeholder goals and forest policies (e.g. Lawrence and Marzano 2014, Kemp et al 2015). On the contrary, modelling studies investigate individual components of forest structure or functioning, such as biomass, without incorporating human needs and views that go beyond forest management practices. Studies focusing on decision-making also recognized epistemic uncertainty, e.g. acknowledging the need to obtain better evidence of the most effective adaptive forest management strategy (e.g. Yousefpour et al 2012). However, ambiguity was also well represented. Ambiguity has been identified as one of the key uncertainty dimensions in natural resource management (Brugnach et al 2008). In forest management, ambiguity may emerge when managers are unsure which tree species to plant, even though they have evidence on how trees can grow in the future (e.g. Lawrence and Marzano 2014).
The wider acknowledgment of ambiguity in decision-making studies can arise from decision problems being inherently complex, especially when they involve human decisions.
Decision-making studies addressed ambiguity mainly through consultation with stakeholders, which confirms the broader system boundaries adopted under this perspective (Kemp et al 2015). Conversely, ambiguity was almost absent from modelling studies, suggesting that modelling is less likely to incorporate multiple views and opinions. However, the recent development of agent-based modelling is starting to bridge this gap (Rounsevell et al 2012, Rammer and Seidl 2015), and modellers are also starting to tackle interdisciplinary questions and problems, such as the selection of suitable tree species for maximizing both social and economic benefits. Hence, we expect a rising recognition of ambiguity in the modelling world.
Surprisingly, we found little evidence of stochastic uncertainty being covered by either modelling or decision-making studies, even though a number of forest questions relate to random elements, such as the exact occurrence and timing of extreme weather events. This inherent stochasticity might simply be too complex to deal with and communicate in modelling and decision-making studies alike, as opposed to epistemic uncertainties.
A second difference is that decision-making studies preferentially address higher levels of uncertainty (i.e. recognised ignorance) compared to modelling studies, which spread evenly across all three levels. This implies that decision-making studies, while confident about quantifiable (statistical) uncertainty, also acknowledge that a lot is still 'known to be unknown'. Adaptation and mitigation studies are influenced by many aspects, and acknowledging that something is unknown (recognised ignorance) should be common. The higher frequency of recognised ignorance in decision-making studies may suggest that scientists dealing with decision-making are aware of the existing evidence about the uncertainty surrounding the impact of climate change on forests, but might struggle to make sense of it (Lemos et al 2012).
In modelling studies, the uniform share of levels indicates that modellers are aware of the existence of multi-layered uncertainties. We found that statistical uncertainty was mostly located in model outputs and parameters, scenario uncertainty in the driving forces, and recognised ignorance within the model parameters (figure 3). These differences indicate that, depending on the stage of the modelling process, diverse uncertainties emerge and dictate which part of the system needs more attention and the application of more complex calibration techniques (van Oijen 2017).
Finally, in decision-making studies we found clear differences in both the number and the type of addressed uncertainties going from the policy-making to more operational management stages (figure 4). For example, policy-making studies at the national scale have mainly dealt with recognised ignorance (known unknowns), while operational studies at the local scale identified all three uncertainty levels. This suggests that at the national scale decisions are harder to make, as they operate based on known unknowns, while operational staff working at local scale, where mainly 'statistical' uncertainty is addressed, can make more confident decisions.

Methods for uncertainty assessment
A range of methods is available for quantifying and communicating uncertainty in environmental management (Refsgaard et al 2007). We find that modelling studies use more methods to assess uncertainties than decision-making studies, which highlights stronger traditions of quantifying uncertainty in the modelling community. Out of 15 main methods, we found that only three, namely sensitivity analysis, scenario analysis, and expert elicitation, are common to both modelling and decision-making studies. Given their wide applicability, this is not surprising, and indeed these are promising methods for easier and clearer communication of uncertainty related to climate change. Scenario analysis, in particular, has been used to quantify several types of uncertainty. This method is very common in forest-related climate impact studies (Petr et al 2014b, Reyer et al 2014, Ray et al 2015) but also in a wide range of other climate impact studies (e.g. Frieler et al 2017), likely due to the simplicity of scenario development, analysis, and communication. However, as our review shows, less frequently used methods offer opportunities for embracing a wider range of uncertainty types.
Furthermore, the dominance of methods for capturing epistemic uncertainty highlights a lack of methods for assessing ambiguity and stochasticity, or greater difficulty in applying them. Among the available methods for assessing ambiguity, only expert elicitation (stakeholder involvement) seems adequate for taking into consideration multiple views and frames of the problem at hand. With the expected increase of integrated models and interdisciplinary research involving multiple types of uncertainty, either new methods should be developed, or the current ones tested for their capacity to capture and communicate ambiguity. Otherwise, the modelling community might struggle to find a common language with their model users, and model results will be less likely to be picked up by users. Finally, we acknowledge that a similar analysis based on papers in a different field, e.g. hydrology, could have yielded a somewhat different set of methods for uncertainty assessment, reflecting disciplinary preferences for certain methods.

Recommendations for modelling, policy and management
Modelling and decision-making studies provide diverse but valid knowledge about a system under study (Brugnach et al 2008). Building upon this review, we provide recommendations that might help future modelling and decision-making studies to increase clarity. This clarity will help to formulate key messages and better communicate uncertainty as required for thorough policy making under climate change (Meah 2019).
Modelling studies should aim to increase the usability of model results, while acknowledging different uncertainty types, by:
• Continuously improving model accuracy and reducing epistemic uncertainty through, e.g. additional field measurements, incorporation of big data from remote sensing, and novel calibration and data assimilation techniques.
• When possible, providing easily interpretable measures of confidence in statistical models (such as confidence or credible intervals) in combination with the effect size of the response variable.
• Being clear about which types of uncertainty they are addressing or not, and then communicating them properly.
• Being clear about which uncertainty types a model is trying to reduce, but also demonstrating when new uncertainties can possibly emerge (i.e. surprising, new relationship between variables).
• Trying to model or incorporate broader uncertainty natures, especially ambiguity, which are important for decision-making and model users.
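The recommendation on interpretable confidence measures can be illustrated with a minimal sketch: a percentile-bootstrap confidence interval reported alongside the effect size. The simulation output below is invented purely for illustration:

```python
import random
import statistics

def bootstrap_ci(sample, n_boot=5000, alpha=0.05, seed=3):
    """Percentile-bootstrap confidence interval for the mean response."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choice(sample) for _ in sample)  # resample w/ replacement
        for _ in range(n_boot)
    )
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot)]

# Invented simulated changes in stand volume (m3/ha) under a warming scenario
rng = random.Random(0)
deltas = [rng.gauss(12.0, 4.0) for _ in range(200)]

effect = statistics.fmean(deltas)       # effect size of the response variable
ci_low, ci_high = bootstrap_ci(deltas)  # easily interpretable confidence band
```

Reporting the pair (effect size, interval) rather than a bare point estimate is one concrete way to follow the bullet above.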
As current forest policies increasingly focus on making forests resilient to environmental change (EU 2013, Forestry Policy Team 2013), they inevitably have to deal with a number of uncertainties associated with climate change impacts on forests. To translate these policies into practice and manage for resilient forests, it is important to identify the key uncertainties and reduce them, if possible (Allen et al 2011). In practical forest management, to make future forests more resilient, management plans need to incorporate uncertainties about climate change impacts (Lindner et al 2014), e.g. about future extreme weather events and pests and diseases, which cause the most severe impacts and may strongly affect the accuracy of model outputs (Littell et al 2011). Management plans can, for example, include a scenario analysis that develops strategic and tactical management options for several alternative future climates. Another example would be using stakeholder involvement to collect opinions on the worst-case scenario and plan accordingly, following an approach consistent with the precautionary principle. For decision-making studies, we therefore provide the following recommendations:
• Using available frameworks and methods to capture all investigated uncertainties for easier communication with peers and model users.
• Questioning which types of uncertainties models and their outputs quantify.
• Being open about the range of uncertainties that the problem might involve, especially including ambiguity.
• Being aware of the model boundaries and about what processes or components are 'known unknowns', because model outputs and their inherent uncertainties represent only a part of forest ecosystem dynamics.
• Acknowledging that recognised ignorance (as a specific nature of uncertainty) is a common driver in policy making.
• Acknowledging, assessing and communicating uncertainties (e.g. by scenario analysis) when developing policies for sustainable forest management and adaptation under climate change (advisors).
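The scenario-analysis and worst-case recommendations above can be made concrete with a minimal sketch. The management options, scenario names, and payoff numbers are entirely hypothetical; the decision rules (maximin as a precautionary rule, and minimax regret) are standard choices for decisions under scenario uncertainty, not rules prescribed by the reviewed studies:

```python
# Hypothetical outcome table: expected timber value (index) of each
# management option under alternative climate scenarios.
outcomes = {
    "business_as_usual": {"mild": 100, "moderate": 70, "severe": 30},
    "species_mixture":   {"mild": 90,  "moderate": 80, "severe": 60},
    "shorter_rotation":  {"mild": 85,  "moderate": 75, "severe": 55},
}

def precautionary_choice(table):
    """Maximin rule: pick the option with the best worst-case outcome."""
    return max(table, key=lambda opt: min(table[opt].values()))

def minimax_regret_choice(table):
    """Pick the option minimising the maximum regret across scenarios."""
    scenarios = next(iter(table.values())).keys()
    # Best achievable value in each scenario
    best = {s: max(table[o][s] for o in table) for s in scenarios}
    # Worst regret of each option across scenarios
    regret = {o: max(best[s] - table[o][s] for s in scenarios) for o in table}
    return min(regret, key=regret.get)

print(precautionary_choice(outcomes))   # -> species_mixture (worst case 60)
print(minimax_regret_choice(outcomes))  # -> species_mixture (max regret 10)
```

A table like this does not remove scenario uncertainty, but it makes explicit which uncertainties the plan accounts for and why a given option was chosen, easing communication with stakeholders.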
Overall, uncertainties should not be perceived as a barrier to action, but should be acknowledged and communicated with 'simple but not simplistic messages'.

Limitations of the review
During this review, we made a number of assumptions which have to be borne in mind when interpreting the results. First, only a small proportion of the existing literature on climate change impacts on forests was captured by our search criteria. This suggests that standardized uncertainty reporting is not yet common practice in either modelling or decision-making studies. Ultimately, most scientific studies address uncertainty, because they bring novel understanding of something previously unknown, but most fail to acknowledge uncertainty in a structured way. Second, for each paper we recorded only the first uncertainty assessment method applied to a unique combination of uncertainty location, level, and nature. As a consequence, we may have omitted other methods used for the same combination. Still, owing to our three-dimensional framework, we believe we identified the majority of methods. Yet, given that our primary focus was on uncertainty types, future research on the exact use and applicability of uncertainty assessment methods could shed further light on how to address different uncertainty types. Third, our uncertainty framework, which we developed before the systematic review, is not comprehensive and might be amended by future users. For example, during the review we came across new uncertainty types that were missing from the proposed framework and were classified as 'not available'. These could be accommodated by introducing 'deep uncertainty' as another uncertainty level, placed just above 'recognised ignorance' (Kwakkel et al 2010). Fourth, we could not completely avoid publication bias, nor a subjectivity bias among the different co-authors classifying the papers (Haddaway and Macura 2018).
To reduce the latter, we followed a well-structured protocol for reviewing papers, which we discussed and shared during several meetings, a common practice when conducting systematic reviews (Haddaway and Macura 2018). Finally, we used a set of uncertainty quantification methods that originated from a modelling background and hence focus heavily on modelling studies (Refsgaard et al 2007). Even though we argue that the Refsgaard et al (2007) quantification methods are very comprehensive, they could be expanded with other uncertainty quantification methods suited to the particular uncertainty dimensions that must be addressed by this type of research (Ascough et al 2008).

Conclusions
This study presents a multi-dimensional recognition of uncertainty in climate change impact and adaptation studies in forest science. The modelling and decision-making studies we reviewed both typically address a wide range of uncertainties, but not necessarily the same ones. This mismatch highlights the need for a more transparent and comprehensive treatment and communication of uncertainty in scientific papers, given that modelling and decision-making studies together should provide the evidence base for solving climate change adaptation problems. Yet trade-offs between which types of uncertainty to address and investigate will remain, because not all of them can be covered in a single study. Therefore, we call for strategies or frameworks that clearly and explicitly identify and communicate uncertainty dimensions. Disregarding these dimensions will likely lead to imperfect communication of uncertainty and, ultimately, to a sub-optimal evidence base for decision-making.