Simulating burn severity maps at 30 meters in two forested regions in California

Climate change is altering wildfire and vegetation regimes in California's forested ecosystems. Present-day fires exhibit increases in high burn severity area and in high severity patch size. The ability to predict future burn severity patterns could better support policy and land management decisions. Here we demonstrate a methodology to first statistically estimate individual burn severity classes at 30 meters and second cluster and smooth high severity patches onto a known landscape. Our goal was not to exactly replicate observed burn severity maps, but rather to treat observed maps as one realization of a random process dependent on climate, topography, fire weather, and fuels, and to use them to inform the creation of additional realizations through our simulation technique. We developed two sets of empirical models with two different vegetation datasets to test whether coarse vegetation data could accurately model burn severity. While visual inspection can be used to assess the performance of our simulation process, we also employ Ripley's K function to compare spatial point processes at different scales and test whether the simulation captures an appropriate amount of clustering. We utilize FRAGSTATS to obtain high severity patch metrics that test the contiguity of our simulated high severity patches. Ripley's K function helped identify the number of clustering iterations, and FRAGSTATS showed how different focal window sizes affected our ability to cluster high severity patches. Improving our ability to simulate burn severity may advance our understanding of the potential influence of land and fuels management on ecosystem-level response variables that are important for decision-makers. Simulated burn severity maps could support managing habitat and estimating risks of habitat loss, protecting infrastructure and homes, improving future wildfire emissions projections, and better mapping and planning for fuels treatment scenarios.


Introduction
California's forested ecosystems have been and continue to be altered by climate change-enhanced wildfires (Westerling et al 2006, Abatzoglou and Williams 2016, Westerling 2016). Many of California's seasonally dry forests historically burned at higher frequency than present day observations, which promoted heterogeneous forest structure and reduced surface fuels and stem density (Stephens et al 2007, 2020). More than 100 years of fire exclusion has increased tree density and surface fuel loads (Parsons and DeBenedetti 1979, Agee 1993, Steel et al 2015), contributing to more high-severity fire (Lutz et al 2011) and larger high severity patch sizes. To better understand the underlying factors that determine wildfire burn severity, and to facilitate simulating fire severity maps under future scenarios, we extend prior research to estimate probabilities of burn severity classes at 30 m as a function of climate, topography, fire weather, and biomass availability.
Burn severity is determined by fuels (biomass), weather, climate, and topography. Total biomass, the structure of that biomass, and the proportion of dead biomass have been identified as some of the most influential variables for high burn severity at both coarse (3 km) and fine (30 m) scale resolutions, and changing climate is increasing the proportion of dead biomass that is available to burn (Miller et al 2009, Miller and Safford 2012, Picotte et al 2016, Parks and Abatzoglou 2020, Goodwin et al 2021). Canopy cover and normalized difference vegetation index, factors which directly relate to fuel load, have also been linked to increases in high severity fires in the Sierras (Koontz et al 2020). Climate and fire weather have been identified as influential variables for high-severity patches (van Mantgem et al 2013, Stevens et al 2017, Keyser and Westerling 2017). Topography, particularly aspect and elevation, influences fire behavior, making it an important predictor of burn severity (Alexander et al 2006, Keyser and Westerling 2017). Area burned has been a common metric used to evaluate current fire seasons and a common modeled variable when projecting future climate-fire interactions (Littell et al 2009, Westerling 2016, 2018, Williams et al 2019). Yet, in fire-adapted forests, like many in California, burn severity and the impacts of fire on the ecosystem are important for understanding and predicting ecosystem function (Liang et al 2017, Hurteau et al 2019, Sleeter et al 2019).
Many fire models use a process-based approach to simulate the effects of fuels, weather, and topography on fire spread and fire effects from a single ignition point. These types of models are useful for understanding fire spread, identifying areas at risk, and quantifying how fire will interact with vegetation and influence subsequent fire or ecosystem processes at temporal scales ranging from years to decades and spatial scales ranging from landscapes to the Earth system (Finney 2001, Lasslop et al 2014, Hurteau et al 2019). While process-based simulations are useful for answering a variety of questions and meeting a range of objectives, they require data at high spatial and temporal resolutions that are typically not yet available for regional scale assessments of climate change impacts, and they impose computational costs that are not practical for simulating large numbers of wildfires under a range of future scenarios. The empirically driven, probabilistic simulation technique described in this manuscript creates a burn severity map by iteratively clustering high severity patches informed by empirically driven statistical modeling of severity classes. This differs from previous process-based burn severity simulations by being much more computationally efficient, and by ignoring fire progression and instead capturing high severity patches by empirically modeling their likelihood. Here we present a methodology that couples empirical modeling and a simulation framework that can facilitate mapping burn severity to support projecting fires under future scenarios of climate, land use, and fuels management. Through this simulation process, we aim to capture the stochasticity of burn severity on a landscape while accounting for the contagious properties of high severity patches. Observed burn severity maps are one realization of a random process dependent on climate and weather, topography, and fuels.
The methodology described here estimates models of the stochastic properties of fire severity to help simulate additional realizations of this random process. By understanding the contagious properties of high severity patches as a function of distance and wind direction, we can better map severity in all classes on a landscape. This approach to burn severity simulation avoids compounding the uncertainties of each step in a process-based approach by probabilistically modeling burn severity on the landscape. The methodology presented here simulates burn severity statistically and can be coupled with modeled fire occurrence and size (Westerling et al 2011) to project burn severity maps with downscaled climate projections.

Study area
Our study area focused on two forested ecoregions in California, the Sierra Nevada and the Northern Coast (Klamath and Mendocino Forests, figure 1). These two forest regions have somewhat similar assemblages of tree species, but differ in shrub species composition, climate, topography, land-use history, and fire history.

Data collection and processing
Burn severity data were retrieved from the Wildfire Burn Severity and Emissions (WBSE) dataset (Xu et al 2022); 30 meter severity rasters were generated by Xu et al (2022), which converted difference normalized burn ratio Landsat images to composite burn index classes using numerous regression models (Picotte et al 2021, Xu et al 2022). WBSE classes of burn severity are defined as: unburned, grass burn, low, moderate, and high. For this paper we group grass burned pixels with unburned into an 'other' class (see limitations). WBSE data were obtained for all fires from 1984 to 2018 at 30 meters. To efficiently test our model, we randomly sampled roughly 10% of the entire WBSE dataset in our desired region at 30 m, giving us n = 3 221 025 pixels in our modeling dataset. The simulation dataset had n = 104 885 324 total pixels. By randomly sampling 10% of pixels in our study site we reduce the influence of spatial autocorrelation on our modeling technique by limiting the impact that neighboring pixels could have on model fit.
Climate data were obtained from the Gridded Surface Meteorological dataset (Abatzoglou 2013) and were aggregated to monthly variables and reprojected onto a 30 m grid using nearest neighbor interpolation. Aggregation was either summed or averaged depending on the type of variable; for example, precipitation was summed over the ignition month of the fire event whereas temperature was averaged over the ignition month of the fire event. Water year precipitation variables were also calculated: the previous water year precipitation was the sum of all precipitation from October to September of the previous year, and the current water year precipitation was the sum of all precipitation from October through the month of the fire event. Elevation and aspect were accessed from LANDFIRE (LANDFIRE 2016), and other derived topographic variables, such as heat load index and topographic position index, were computed with the spatialEco package in R (R Core Team 2021).
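The water-year aggregation described above can be sketched as follows. This is an illustrative Python sketch, not the authors' code; the dictionary structure and function name are assumptions, and it assumes the fire month falls between January and September.

```python
def water_year_precip(monthly_precip, fire_year, fire_month):
    """Sum monthly precipitation over water years (October-September).

    monthly_precip maps (year, month) -> precipitation; structure is
    illustrative. Assumes fire_month is between January and September.
    """
    # Previous water year: October (fire_year - 2) through September (fire_year - 1)
    prev = sum(monthly_precip[(fire_year - 2, m)] for m in (10, 11, 12))
    prev += sum(monthly_precip[(fire_year - 1, m)] for m in range(1, 10))
    # Current water year: October (fire_year - 1) through the month of the fire
    cur = sum(monthly_precip[(fire_year - 1, m)] for m in (10, 11, 12))
    cur += sum(monthly_precip[(fire_year, m)] for m in range(1, fire_month + 1))
    return prev, cur
```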

Vegetation datasets
Two modeled vegetation products were used in this manuscript: the Land Use and Climate Change (LUCAS) (Sleeter et al 2017) and Landscape Disturbance and Succession (LANDIS) (Hurteau et al 2016) products; these vegetation products were selected because they can be simulated with future climate projections. Both products were tested to identify whether a difference in spatial resolution could improve model fit and the simulation technique. Vegetation layers at 1 km resolution were processed from LUCAS (Sleeter et al 2017). Vegetation layers are a snapshot of tons of aboveground carbon per hectare. Layers consist of both live biomass and dead organic pools. During model construction some layers were aggregated to simplify interactions. Aboveground biomass was aggregated into two different classes: live biomass (merchantable wood and other wood) and dead biomass (snag branch, snag stem, leaf litter, decomposed litter, down branches, and down stems). LUCAS variables (modeled at 1 km) were reprojected using nearest neighbor interpolation to 30 m grids to match our burn severity data. LANDIS vegetation was originally 150 m resolution and was reprojected using nearest neighbor interpolation to match our 30 m grid. LANDIS is a forest landscape model that simulates trees and shrubs within a given cohort (Hurteau et al 2016). LANDIS is only simulated for forested regions, defined as areas where vegetation has a minimum height of 5 m and at least 10% canopy cover (Hurteau et al 2016, Riley et al 2019). In areas that were not simulated by LANDIS, we replaced missing data classified as shrub with the average shrub value within that ecoregion: for the Sierras we used the average shrub value from the Sierras, and likewise for the North Coast. LANDIS contained shrub, surface duff, surface litter, tree, and deadwood biomass variables. We aggregated shrub, surface duff, surface litter, and tree into a Live variable and shrub and tree into a Live Heavy variable to have variables comparable with LUCAS.

Modeling and simulation framework
To simulate burn severity for individual wildfire events, we first estimated the probabilities of individual burn severity classes within a fire perimeter as a function of climate, vegetation, fire weather, and topography using observed fire severity maps. We developed two sets of empirical models, one fit with LUCAS variables and the other fit with LANDIS, to compare the effects of differences in the spatial resolution of our biomass data. Next, we produced an image with candidate burn severity classes assigned randomly based on the estimated probabilities; this then informed our clustering technique, which influences how classes are distributed onto the landscape using weights derived from distance to the nearest candidate high severity pixel and vector winds. This process is done iteratively (zero to three iterations) to reach the desired amount of high severity clustering, which is dependent on the known fraction of high severity out of total area burned. This iterative smoothing technique facilitates finding a balance between over- and under-clustering high severity patches. We can simulate an arbitrary number of realizations with clustering properties similar to historically observed severity maps to explore spatial variability in burn severity. Current dynamic fire simulation techniques require very fine temporal and spatial variables and are computationally expensive (Finney 2001, Lasslop et al 2014, Hurteau et al 2019). Our approach introduces a stochastic process into our simulated severity mapping, combined with smoothing methods that emulate these fine scale contagious interactions, in order to generate fire severity maps that are statistically comparable to observed historical fire severity maps.

Empirical modeling
We estimated probabilities of any given 30 m pixel burning in one of four severity classes: low, moderate, high, and other (grass and unburned). To capture the probability of severity classes we deploy a stepwise conditional modeling approach using generalized additive models (GAMs) in the mgcv package in R (Wood 2017, R Core Team 2021). We utilize GAMs because of their ability to fit non-linear relationships while still maintaining a framework that allows us to analyze how climate and vegetation interact with disturbance. Other methods such as classification and regression trees were considered; however, the use of GAMs allows for more interpretability with projected data that may fall outside of historical observations. We first used a binomial regression to model whether pixels for a given fire burned in any of the low, moderate, or high severity classes, but not 'other', as a function of climate, topography and LUCAS carbon pools (equation (1)). We then use a binomial regression to model the probability of any pixel burning at high or moderate severity but not low severity, given that the original sampled pixel was not classified as other (equation (2)). Finally, we model the probability of any pixel burning at high severity and not moderate severity, given that the original sampled pixel was not classified as other or low (equation (3)).
Equations (1) through (3) were fit with LUCAS biomass, and equations (4) through (6) apply the same structure fit with LANDIS biomass. For example, the LANDIS moderate-or-high model (equation (5)) takes the form

P_MH = β0 + f1(thousand-hour dead fuel moisture) + f2(Elevation) + Live Heavy Biomass + Live Heavy Biomass:Elevation.

Terms f are semiparametric smooth functions, and terms g are tensor spline interactions between two or more explanatory variables (Hastie 1990, Preisler and Westerling 2007, Wood 2017). We employ spline interactions to add finer specificity to the coarse biomass variables; by interacting coarse (1000 m) biomass variables with our topographic (30 m) variables we create a proxy for the biogeography observed in our spatial domain. The colon denotes a non-smoothed interaction between two variables. All models were developed parsimoniously to best evaluate the drivers of any given severity class and to prevent overfitting (additional information in supplementary documentation). Variables were selected based on how they impacted a model's Akaike Information Criterion (AIC) score (Akaike 1974, 1981) and their relative influence on model performance. Variables were also screened by testing for concurvity. Concurvity can identify collinear relationships among predictor variables, and can also identify whether variables are smooth curves of other included smoothed variables, which creates wide confidence intervals (Wood 2017). From the model outputs, a set of raster images is created defining pixel classes (low, moderate, high, or other severity) within a fire perimeter based on multinomial probabilities (equation (7)):

P(other) = 1 − P_LMH
P(low) = P_LMH (1 − P_MH)
P(moderate) = P_LMH P_MH (1 − P_H)
P(high) = P_LMH P_MH P_H

In these conditional probabilities, LMH refers to the estimated probabilities from the Low, Moderate, and High severity binomial, MH refers to the estimated probabilities from the Moderate and High severity binomial, and H refers to the estimated probabilities from the High severity binomial model.
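The nested-binomial construction of the multinomial class probabilities, and the random assignment of candidate classes, can be sketched in Python. This is an illustrative sketch of the conditional structure described above, not the authors' R code; array and function names are ours.

```python
import numpy as np

rng = np.random.default_rng(42)

def severity_probabilities(p_lmh, p_mh, p_h):
    """Combine the three nested binomial predictions into multinomial
    class probabilities (our reading of the conditional structure)."""
    p_other = 1.0 - p_lmh
    p_low = p_lmh * (1.0 - p_mh)
    p_mod = p_lmh * p_mh * (1.0 - p_h)
    p_high = p_lmh * p_mh * p_h
    return np.stack([p_other, p_low, p_mod, p_high], axis=-1)  # rows sum to 1

def draw_candidate_classes(probs):
    """Assign each pixel a candidate class (0=other, 1=low, 2=moderate,
    3=high) by inverse-CDF sampling from its multinomial probabilities."""
    cum = np.cumsum(probs, axis=-1)
    u = rng.random(probs.shape[:-1] + (1,))
    return (u > cum).sum(axis=-1)
```

Because the classes are nested, the four probabilities always sum to one by construction, so a single uniform draw per pixel suffices to assign a class.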

Wind-weighted distance metric
To capture the high severity clustering that is seen in wildfires, we developed an exponentially decaying, inverse-distance, wind-weighted (WindDist) metric. WindDist (Di) accounts for neighboring 30 m pixels burned at high severity as far as ten pixels away in all directions (a 21 × 21 grid around some jth pixel) and as close as five pixels away in all directions (an 11 × 11 grid around some jth pixel). The metric sums, over all neighboring pixels, an exponentially decaying inverse-distance weight modulated by wind direction, where y_j is the response variable (1 if pixel j is high severity, 0 otherwise), d_ij is the distance between pixel j and the ith pixel of interest, and θ_ij is the angle between the direction of the jth pixel and the wind direction.
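A WindDist-style metric can be sketched as below. The exact decay and angular weighting used in the paper are not reproduced here; the form exp(−d) · (1 + cos θ)/2 and the axis convention are our assumptions, chosen only to illustrate exponential decay with distance and up-weighting of wind-aligned neighbors.

```python
import numpy as np

def wind_dist(high_sev, wind_dir_deg, radius=10):
    """Illustrative WindDist sketch: for each pixel, sum a decaying
    distance weight over neighboring high severity pixels within
    `radius` cells, up-weighting neighbors aligned with the wind."""
    nrow, ncol = high_sev.shape
    out = np.zeros((nrow, ncol))
    wind = np.deg2rad(wind_dir_deg)
    for i in range(nrow):
        for j in range(ncol):
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    r, c = i + dr, j + dc
                    if (dr == 0 and dc == 0) or not (0 <= r < nrow and 0 <= c < ncol):
                        continue
                    if high_sev[r, c] != 1:
                        continue
                    d = np.hypot(dr, dc)                 # distance in pixels
                    theta = np.arctan2(dc, -dr) - wind   # neighbor bearing vs wind
                    # assumed weighting: exponential decay, wind-aligned boost
                    out[i, j] += np.exp(-d) * (1.0 + np.cos(theta)) / 2.0
    return out
```

The quadruple loop is written for clarity; a production version would vectorize this as a convolution over the candidate high severity raster.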
We then refit the nested binomial models and create a new joint probability accounting for neighboring high severity pixels and wind direction.

Simulation
To employ the Di metric in our simulations, we run an initial iteration (Iteration 0) generating a random selection from the joint probability model without Di and create a raster of all pixel predictions within a given fire. From this initial predicted raster, we calculate Di for all the pixels given that we have a set of candidate high severity pixels, and then generate a new raster of predictions that accounts for proximity to high burn severity pixels and wind direction.
The wind-distance variable is thus dependent on the random outcomes of the first pass of models. This is done iteratively at each step with the Di dependent on the high severity candidates of the previous iteration. The simulation had four iterations (iteration 0 to iteration 3) of clustering, where each additional iteration increased clustering of high severity pixels into patches (figure 2).
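The iteration scheme can be sketched as a driver loop. This is a structural sketch only: `predict_probs` and `wind_dist` are placeholder callables standing in for the fitted conditional models and the WindDist calculation, and the names are ours, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_severity(predict_probs, wind_dist, n_iterations=3):
    """Sketch of the iterative simulation: iteration 0 draws candidate
    classes without Di; each subsequent iteration recomputes Di from the
    previous draw's high severity candidates and redraws."""
    di = None  # iteration 0: the models are applied without Di
    for _ in range(n_iterations + 1):
        probs = predict_probs(di)             # (npixels, 4) class probabilities
        cum = np.cumsum(probs, axis=1)
        draw = (rng.random((probs.shape[0], 1)) > cum).sum(axis=1)
        high = (draw == 3).astype(int)        # candidate high severity pixels
        di = wind_dist(high)                  # informs the next iteration
    return draw
```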
For every step within each iteration, we generated 0 and 1 binary rasters from our estimated probabilities. Next, we use convolution to achieve smoothing of our images. Here, convolution is the process of applying a weighted smoothing kernel to an image of 0 and 1 pixels and converting it to a new raster of pixels with values that are continuous from 0 to 1, with the goal of clustering high severity pixels. The weighted smoothing kernel is defined as the distance from any given pixel j multiplied by the direction of the mean monthly wind direction at the time of the fire event summed across all neighboring pixels at varying focal windows. We tested three focal windows of 5 × 5, 7 × 7, and 11 × 11 pixels to understand the varying impacts of smoothing on neighboring pixels. The convolution process allowed for the selection of pixels to be non-random, where we set a threshold on the continuous raster of 0-1 pixels that matched a desired fraction in an observed burn severity class. Convolution occurred at each step of the simulation process. To test the tolerance limit for thresholding (i.e. the precision within which the convolution output must match the total predicted severity fractions for the fire map being simulated), we tested for a five-percent, two-percent, and one-percent tolerance when simulating burn severity pixels. Selecting a tolerance affects the precision with which we match the observed severity fractions of the modeled fire, and also affects the computational costs, which becomes more relevant when large numbers of simulations are desired.
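A single convolution-and-threshold step can be sketched as follows. This is an illustrative sketch: the kernel construction (wind weighting) and the tiny tie-breaking jitter are our assumptions, not the paper's implementation, and the threshold is chosen as a quantile so the exceedance fraction matches the target severity fraction.

```python
import numpy as np
from scipy.signal import convolve2d

def cluster_step(binary, kernel, target_frac, rng):
    """Smooth a 0/1 high severity raster with a weighted kernel, then
    threshold the continuous result so that the fraction of pixels above
    the threshold matches the target severity fraction."""
    smooth = convolve2d(binary.astype(float), kernel, mode="same")
    smooth = smooth + 1e-9 * rng.random(smooth.shape)  # break ties at the cut
    threshold = np.quantile(smooth, 1.0 - target_frac)
    clustered = (smooth > threshold).astype(int)
    # the caller compares clustered.mean() to target_frac against the tolerance
    return clustered, clustered.mean()
```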

Simulation testing
To test the clustering of high severity patches, we used Ripley's reduced second moment function K(r) (K function) for multivariate spatial patterns (Dixon 2002). The multivariate K function is defined as

K_ij(r) = E / λ,

where E is the number of type j events within distance r of a randomly chosen type i event and λ is the number of type j events per unit area. In this application, we are interested in the relative clustering of high burn severity patches, thus both i and j here refer to discrete 30 m high severity pixels, with r the scale of analysis. The K(r) calculated from the observed image is then used as the baseline against iterated simulations to test whether we are capturing the same levels of observed high severity clustering. Note that while we focused our analysis on the clustering of high severity burn patches due to their potential impacts on risks to infrastructure, habitat suitability, and post-fire regeneration, more realistic simulation of high burn severity clustering in our conditional model framework results in more realistic simulation of low and moderate spatial patterns as well. Visual inspection can be used to judge the performance of a K function comparison; we supplement this by evaluating K at different r values.
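For intuition, a naive K(r) estimate for a single point type (here, high severity pixels) can be sketched in a few lines. This is a simplified, edge-uncorrected version for illustration, not the estimator used in the paper, which handles edge correction and marked point types.

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive, edge-uncorrected estimate of Ripley's K at distance r:
    the average number of other points within r of each point, divided
    by the intensity (points per unit area)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    intensity = n / area
    dists = np.hypot(pts[:, None, 0] - pts[None, :, 0],
                     pts[:, None, 1] - pts[None, :, 1])
    pairs_within = (dists <= r).sum() - n  # drop self-pairs on the diagonal
    return pairs_within / (n * intensity)
```

For a completely random pattern, K(r) is close to πr²; clustered patterns, such as high severity patches, yield larger values at the corresponding scales.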
To further evaluate our simulated high severity patches, we calculated high severity patch metrics for the observed burn severity map, along with each iteration of simulation and the varying focal window sizes. We tested for high severity patch size and patch number. The comparison of patch statistics between the observed map and our simulated maps allows us to evaluate two aspects of our simulation technique: testing the size of our focal window and testing the number of iterations of smoothing.
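High severity patch count and size can be computed from a binary raster with a connected-components labeling, following the convention stated later in the paper that a patch is two or more contiguous high severity pixels. This is a FRAGSTATS-style sketch; the function name and choice of 8-neighbour connectivity are ours.

```python
import numpy as np
from scipy import ndimage

def patch_metrics(high_sev):
    """Count high severity patches and their sizes with 8-neighbour
    connectivity; a patch is two or more contiguous high severity pixels."""
    eight_connected = np.ones((3, 3), dtype=int)
    labels, _ = ndimage.label(high_sev, structure=eight_connected)
    sizes = np.bincount(labels.ravel())[1:]  # drop the background label 0
    sizes = sizes[sizes >= 2]                # single pixels are not patches
    return len(sizes), sizes
```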

Results
For the empirical burn severity models, we tested a total of 36 variables (supplementary table 1). The best models included live biomass with varying levels of aggregation. We found that the best models contained thousand-hour fuel moisture, and that additional climate variables did not significantly improve model performance. We utilized topographic explanatory variables to incorporate more spatial specificity into our biomass variables. By incorporating biomass and topography variables both independently and in interactions, we were able to identify the joint and independent influences of each variable on model performance. Although certain variables were not explicitly included in each model, we note that due to the conditional modeling process, variables selected in earlier model steps carry influence into each subsequent severity class model. Spatial locations (lat, long) had no influence on model performance; however, this may be due in part to our inclusion of coarse vegetation and climate variables, which obscured spatial variability at 30 m. We found that the Di metric resulted in improved AIC scores (tables 1 and 2). AIC scores improved in the Northern Coast when using the higher resolution LANDIS data compared to LUCAS; however, they did not improve in the Sierras.
We established that a two-percent tolerance during the convolution step provided an acceptable compromise between the level of precision in simulated severity fractions and computational costs. To determine the number of convolution iterations used, we visually compared simulated fire severity maps to observed fire severity maps and tested the clustering of the simulated fire severity maps by comparing Ripley's K statistics and FRAGSTATS patch metrics for simulations with observed severity maps. In most scenarios, iteration one or two captured an appropriate amount of clustering, with iteration one coming closest to the observed average level of clustering, albeit with significant variability. To determine the focal window size, we compared patch number and patch size distributions between iterations.
We compared the high severity patch count (tables 3 and 4) and the empirical distribution of high severity patch sizes (figures 2 and 3) between focal window sizes. The focal window size and the iteration number are not independent of each other, and selecting an appropriate focal window size may depend on the size of the high severity patch in question. For the rest of the analysis in this manuscript we utilize a focal window of 5 × 5. We chose this focal window size because we found that a focal window of five and one iteration of smoothing yielded patch counts and size distributions similar to historical observations. Across both sets of empirical models, the LANDIS-fit models led to more overall clustering of patches, as we see a decrease in patch number per simulation iteration. In both cases one iteration of smoothing yielded similar patch numbers with a focal window of five.
Comparing the different vegetation datasets in actual simulations, we observed that at either 150 m or 1 km spatial scale, our simulation technique does a fair job of replicating areas of high severity (figure 4). However, LANDIS has an extreme limitation in this modeling framework as it results in severe underestimates of low severity burns (see limitations). For visual comparison and to further explore Ripley's K, we employ only the LUCAS set of simulations for the rest of this manuscript so that we can evaluate severity in all classes.
We compared point pattern distributions of high severity clustering between our different iterations using Ripley's K function. This was done for all simulations across both model sets (LUCAS and LANDIS). By evaluating a K statistic at each step of the iterative process, we identified that iteration one, on average, produced results that were most comparable to observed burn severity maps (figure 5). We evaluated the distance r of K(r) at four different distances (km): 3, 6, 12, and 21 (figure 5), which allows for larger comparisons across our simulated maps. A distance of r = 3 km captured clustering of high severity in most fires >404 hectares, and r = 12 km was important in capturing high severity patches in mega-fires >40 468 hectares, which tend to occur under extreme conditions. Figure 5 shows that across all four tested r values, iteration 1 was, on average, comparable to historical fire severity maps across the full simulated sample. In figures 6 through 8, we show examples of simulated burn severity maps. Visual inspection shows that Iteration 1 and Iteration 2 produce maps similar to observed maps. Our goal was not to replicate the observed maps but to create statistically similar maps.

(Tables 1 and 2 caption: Negative change in AIC score with additional variable Di represents improved model fit. Tables 3 and 4 caption: High severity patch counts with varying focal window sizes and iterations of the clustering technique; a high severity patch is counted as two or more contiguous high severity pixels; the total observed patch number is 148 836.)
Ripley's K-function plots for each sample fire show that we can capture a desired amount of clustering through one or two iterations of convolution, and that one iteration of convolution comes closest on average to the observed historical clustering of high severity patches (figures 6-8).
The Di metric was introduced into the empirical models after iteration 0. We observe increased clustering performance beginning in iteration 1 (tables 3, 4 and figures 5-8). By utilizing the average wind direction of the fire ignition month, we use the Di metric as a proxy for some fine scale interactions that occur in the spread of high severity fire, such as firebrand showers and subsequent spotting without the use of fire progression maps, or a process-based simulation technique.

Discussion
We developed a methodology to simulate burn severity maps through empirically driven, probabilistic modeling of burn severity and clustering of high severity patches. Our empirical model agrees with current literature that biomass is the most important predictor for burn severity (Miller et al 2009, Miller and Safford 2012, Picotte et al 2016, Parks and Abatzoglou 2020, Goodwin et al 2021, Stephens et al 2022). Our results also agree with previous literature that climate and fire weather are influential variables for modeling high severity patches (van Mantgem et al 2013, Keyser and Westerling 2017, Stevens et al 2017, Keyser and Westerling 2019). In this manuscript we used topography in interactions to add specificity to coarser variables; however, research has found it to also be an important predictor for burn severity at all classes (Alexander et al 2006, Keyser and Westerling 2017). Implementation of the Di metric aided in clustering high severity patches in large fires, which is an area of high interest. Unlike past process-based fire simulations (Finney 2001, Lasslop et al 2014, Hurteau et al 2019), we developed a more computationally efficient simulation technique built upon empirical modeling of burn severity. By accounting for the underlying contagious properties of high severity fire, we simulate contiguous high severity patches that are comparable to observed burn severity maps.

Limitations
We grouped grass pixels that burned with unburned pixels because grass fires have little impact on emissions modeling and, within the domain of our study area, we had relatively little grass fire area, which limited our ability to model this type in a robust manner. LANDIS vegetation was limited to areas with greater than 5 m height and 10% canopy cover. For this reason, low severity patterns were severely underestimated. Future work with the LANDIS dataset will be limited to high and moderate severity simulations unless a proxy or substitute for grass can be implemented.
Although fire refugia (Krawchuk et al 2016) were not explicitly modeled in this study, we note that the underlying empirical models should capture areas with a higher probability of fire refugia. Using a Monte Carlo simulation approach with the methodology presented in this manuscript could identify a distribution of fire refugia areas and test whether they are stochastic or appear consistently in each simulated map.

Future work
To best utilize this statistical modeling and simulation framework, future work could look to create a heuristic approach that can systematically adjust parameters within the framework such as tolerance, focal window, and number of iterations. Ideally this heuristic approach could rely on climate forcing such as aridity and dominant wind direction and speed, biomass availability, and higher temporal resolution to better capture fine-scale interactions during the time of the fire.

Conclusion
Simulating future scenarios of wildfire burn severity could inform land use and fuels management. We simulate wildfire burn severity by expanding on prior research investigating drivers of wildfire burn severity to estimate 30 m severity probability maps using conditional binomial GAMs and clustering areas of high severity through iterative convolutions. Our results identified biomass; climate variables such as vapor pressure deficit, dead fuel moisture, and relative humidity; and topographic variables such as heat load index and elevation as influential in predicting historically observed burn severity classes at 30 meters. The conditional modeling process allowed us to investigate drivers of all severity classes and to jointly estimate their probabilities. By stochastically simulating wildfire burn severity, we were able to produce computationally efficient and realistic burn severity maps that are comparable to historically observed burn severity maps.
Differences between 1 km LUCAS vegetation and 150 m LANDIS vegetation had little effect on the overall simulated high severity patches, but because the spatial extent of LANDIS covers only forested regions, our models using LANDIS biomass severely underestimated low severity burns. A vegetation product that was mapped at 30 m may improve model results, however we believe that by interacting the coarser vegetation with 30 m topography we capture more specificity in the spatial patterns of the vegetation.
We focused on forest fires for this study, which represent a substantial proportion of the area burned in California. We are exploring the feasibility of creating a single statewide model for simulating fire severity maps; this could allow us to better parameterize our severity modeling for vegetation types that are scarce in the current study region. Determining the number of iterations of convolutions to use could be related to climatic conditions using a heuristic approach to iteration selection, by exploring whether outliers in the fire record with extremely high clustering of high severity can be predicted by available climate, weather, fuels, and topographic data. Improved simulations may also capture differences between plume-and wind-dominated fires. Simulated burn severity maps may aid in habitat management and estimating the risk of habitat loss, improve future wildfire emissions projections, and help support planning for fuels treatment scenarios, among other applications. The simulation methodology described here may be used to extend the framework developed for application in California's 5th Climate Assessment for statistically simulating wildfires with future climate, development and fuels management scenarios.

Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors.