Pyrolysis Kinetics and Multi-Objective Inverse Modelling of Cellulose at the Microscale

The chemistry of pyrolysis, together with heat transfer, drives ignition and flame spread of biomass materials under many fire conditions, but it is poorly understood. Cellulose is the main component of biomass and is often taken as its surrogate: its pyrolysis chemistry is simpler than that of biomass and dominates biomass pyrolysis. Many reaction schemes with corresponding kinetic parameters can be found in the literature for the pyrolysis of cellulose, but their appropriateness for fire is unknown. This study investigated inverse modelling and blind prediction with six reaction schemes of different complexities for isothermal and non-isothermal thermogravimetric experiments. We used multi-objective optimisation to inverse model, both simultaneously and separately, the kinetic parameters of each reaction scheme against several experiments. We then tested these parameters with blind predictions. For the first time, we reveal a set of equally good solutions for the modelling of pyrolysis chemistry of different experiments. This set of solutions is called a Pareto front, and represents a trade-off between predictions of different experiments. It stems from the uncertainty in the experiments and in the modelling. Parameters derived from non-isothermal experiments compared well with the literature, and performed well in blind predictions of both isothermal and non-isothermal experiments. Complexity beyond the Broido-Shafizadeh scheme with seven parameters proved to be unnecessary to predict the mass loss of cellulose; hence, modellers should use simple reaction schemes to model pyrolysis in macroscale fire models.


INTRODUCTION
Biomass provides the fuel for forest and urban fires. Whether in a forest in the form of plants, or in buildings in the form of engineered wood, like Cross Laminated Timber, biomass is composed of three main components: cellulose, hemicellulose, and lignin [1]. During a fire, biomass undergoes pyrolysis, a thermochemical process in which it decomposes to char, condensable liquids (tar), and gases [1]. Cellulose is often taken as a surrogate for the pyrolysis of biomass. It is the main component of biomass and dominates the chemistry of pyrolysis in reactors and fires [2], [3]. In a timber building, the chemistry of pyrolysis controls roughly the first 50 min of the fire behaviour (charring) of timber [4]; Suuberg and Milosavljevic even recommended using cellulose as a standardised material to study the behaviour of wood in fire [5]. Studying cellulose is therefore a natural first step in studying the chemistry of pyrolysis of wood. Despite 60 years of research, the chemistry of pyrolysis for cellulose is still under scientific debate [3], with reaction schemes ranging from one step (one reaction, three parameters) [6] to mechanistic (>300 reactions, >600 parameters) [7]. There is no consensus on the optimal kinetic model, with different authors favouring different complexities of kinetic models [7]-[10].
The chemistry of pyrolysis is traditionally studied on the microscale using thermogravimetric analysis (TGA). In TGA, a small sample (usually a few mg) is inserted in a furnace and subjected to a temperature-time program, either isothermal or non-isothermal. In both cases, the temperature is measured close to the sample [11], and the residual mass and mass loss rate of the sample are recorded with respect to time.
The literature on the decomposition chemistry of cellulose is largely fragmented: any particular scheme was calibrated to either isothermal or non-isothermal experiments. We use the word calibrated here to indicate that a validation study (prediction of data outside of those used for calibration) is usually missing. Antal [6], [16], [17] found that for non-isothermal experiments on microscale ash-free samples of cellulose, the decomposition of cellulose can be represented by a single step. This scheme, however, cannot explain the variation of the char yield with heating rate [18]. For this variation, at least two competing reactions are required [2]. Broido and co-workers first proposed the mechanism of two competing reaction pathways, one to char and one to tar, first with an intermediate [19] and then without an intermediate [20]. Bradbury et al. [21] added an activation reaction to Broido's second scheme, without an intermediate, while Agrawal [22] modified Broido's first scheme, with an intermediate, by separating the reaction of char and gas formation into two reactions. None of the above have been validated for both isothermal and non-isothermal conditions with the same set of kinetic parameters. Only Capart et al. [9] and Corbetta et al. [23] were able to predict both isothermal and non-isothermal experiments of cellulose with the same set of kinetic parameters, and both used more than 9 parameters to calibrate their models.
The question remains whether it is appropriate to predict both isothermal and non-isothermal experiments using reaction schemes of lower complexity than those of Capart et al. [9] and Corbetta et al. [23]. We investigate this question by inverse modelling isothermal and non-isothermal experiments using multi-objective optimisation.

Mathematical Model of Microscale Kinetics
We used the mathematical model of thermogravimetric analysis (TGA) by Rein et al. [12], which is written in generalised form in Eq. 1 to Eq. 5. We assumed a sample of small length scale compared to the heat and mass transport length scale, and a high carrier gas flow compared to the release rate of pyrolyzates. We also assumed that all reactions are irreversible and that the heat of pyrolysis is negligible. In other words, we assume that the measured temperature and residual mass are those of the sample and that all changes to the mass are caused solely by decomposition.
Adopting the nomenclature of [24], Eq. 1 describes the mass evolution of the kth solid species:

$$\frac{dm_k}{dt} = \sum_j \left(\nu^{f}_{kj} - \nu^{c}_{kj}\right)\dot{\omega}_j \qquad (1)$$

where $\nu^{f}_{kj}$ is the formation or yield coefficient of the kth species from the jth reaction; $\nu^{c}_{kj}$ is the consumption coefficient of the kth species by the jth reaction; and $\dot{\omega}_j$ is the rate of the jth reaction. If the kth species is neither formed nor consumed by the jth reaction, $\nu^{f}_{kj}$ and $\nu^{c}_{kj}$ are both zero. The rate of the jth reaction is given by Eq. 2:

$$\dot{\omega}_j = k_{0j} \prod_k m_k^{n_{kj}} \, y_{O_2}^{\delta_j}, \qquad k_{0j} = A_j \exp\!\left(-\frac{E_j}{RT}\right) \qquad (2)$$

where $k_{0j}$ is the rate constant of the jth reaction, with $A_j$ as the pre-exponential factor, R as the universal gas constant and $E_j$ as the activation energy; T is the temperature of the sample; $m_k$ is the mass of the kth species normalised with respect to the initial sample mass; $n_{kj}$ is the reaction order of the kth species in the jth reaction; $y_{O_2}$ is the ambient oxygen fraction; and $\delta_j$ is the reaction order of the oxygen fraction in the jth reaction. The reaction order $n_{kj}$ is zero if the kth species is not a reactant in the jth reaction. The reaction order $\delta_j$ of the oxygen fraction is zero if oxygen is not a reactant in the jth reaction (pyrolysis reaction). All reactions in this paper are pyrolysis reactions; no oxidation is considered.
The residual mass (m) and the total mass loss rate ($\dot{m}$), given by convention as a positive value, are calculated as in Eq. 3 and Eq. 4:

$$m = \sum_k m_k \qquad (3)$$

$$\dot{m} = -\frac{dm}{dt} = -\sum_k \frac{dm_k}{dt} \qquad (4)$$

where $m_k$ and $dm_k/dt$ are the mass and mass loss rate of the kth solid-phase species at time t. The above set of ordinary differential equations (Eq. 1 and 2) is solved numerically with a stiff solver using the initial and boundary conditions given in Eq. 5:

$$m_k(0) = \delta_{k1}, \qquad T(t) = \min\left(T_0 + \beta t,\; T_d\right) \qquad (5)$$

where $\delta_{k1}$ is the Kronecker delta (the sample is initially pure cellulose, taken as species k = 1), β is the linear heating rate, and $T_d$ is the desired temperature of the isothermal furnace. For non-isothermal experiments, $T_d$ is infinity. Throughout the paper we assume that all reaction rates are first order, as all reaction schemes in this study were proposed as first order.
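As an illustration of this formulation, the sketch below integrates the governing equations with a stiff solver for a hypothetical one-step scheme with an explicit char yield (an Antal-type reaction, first order, no oxygen dependence). All parameter values here are illustrative placeholders, not the fitted values reported in this paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical one-step scheme: cellulose -> nu_char * char (+ gases).
# First order (n = 1), no oxygen dependence (delta = 0).
# A, E and nu_char are placeholder values, not fitted parameters.
A, E, nu_char = 1.0e13, 195.0e3, 0.10   # 1/s, J/mol, char yield
R_GAS = 8.314                           # universal gas constant, J/(mol K)

def rhs(t, m, beta, T0, Td):
    """Eq. 1 and 2 for this scheme; temperature program from Eq. 5."""
    T = min(T0 + beta * t, Td)                     # linear ramp capped at Td
    omega = A * np.exp(-E / (R_GAS * T)) * m[0]    # rate of the single reaction
    return [-omega, nu_char * omega]               # cellulose consumed, char formed

beta, T0, Td = 10.0 / 60.0, 300.0, np.inf          # 10 K/min, non-isothermal
t_end = (900.0 - T0) / beta                        # ramp up to 900 K
sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0], args=(beta, T0, Td),
                method="BDF")                      # BDF: a stiff ODE solver
residual_mass = sol.y.sum(axis=0)                  # Eq. 3: m = sum over species
print(f"final residual mass: {residual_mass[-1]:.3f}")
```

Once the sample is fully decomposed, the residual mass tends to the char yield, which gives a quick sanity check on any implementation of the model.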

Reaction schemes
In this study, we will test six different reaction schemes for the pyrolysis of cellulose, shown in Fig. 1 and described in this section. Historically, Broido and Weinstein were the first to reveal two competing pathways in cellulose pyrolysis [19]: one to tar, a condensable liquid, and one to char and gases (pyrolyzates). They hypothesised the existence of an intermediate species by observing an increase in the char yield after prolonged preheating at 230 °C [20]. This conclusion was later confirmed by Varhegyi et al. [16], who also proposed schemes similar to the Broido-Kiltzer (BK) scheme. We chose the Broido-Kiltzer scheme (3 reactions, 7 parameters) to represent this class of schemes with two competing reaction pathways and one or more intermediates in one pathway, as it is the simplest. In 1975, Broido and Nelson proposed that the intermediate could be eliminated to give the simplest plausible reaction scheme of cellulose pyrolysis [20], called the Broido-Nelson (BN) scheme (2 reactions, 5 parameters) in Fig. 1.

Fig. 1 Overview of the different reaction schemes of cellulose from the literature studied in this paper. The Antal scheme is from [17], the Broido-Nelson scheme is from [20], the Agrawal scheme is from [22], the Broido-Kiltzer scheme is from [19], the Broido-Shafizadeh scheme is from [21] and the Ranzi scheme is from [25].
In 1979, Bradbury et al. [21] studied the isothermal behaviour of cellulose between 259 and 407 °C. They supported the Broido-Nelson scheme, but also observed an initiation period in the experiments. They concluded that an initiation reaction to active cellulose precedes the two competing reactions. This scheme is known as the classical Broido-Shafizadeh (BS) model (3 reactions, 7 parameters) [16].
Agrawal [22] re-examined the Broido-Kiltzer scheme in 1989, and suggested that the char and gas reaction R3 should be separated into two reactions, as there is no constant char-to-gas ratio across all temperatures. Mamleev et al. [26] proposed an equivalent model, excluding their secondary reactions, in 2007. They argued that their model is consistent with all experimental observations, and we have included this model as the Agrawal scheme (4 reactions, 8 parameters).
In the 1990s, Antal et al. argued that only one reaction (1 reaction, 3 parameters) is needed to model the pyrolysis of pure cellulose in non-isothermal conditions [17]. Here we modified Antal's reaction scheme by explicitly incorporating a char yield, to make it compatible with our mathematical formulation. In fire science, the Antal scheme represents the heavily used simple one-step reaction, even though, for bushfires, it fails to explain the fire behaviour at the perimeter of the fire, according to Sullivan and Ball [2].
The most complex reaction scheme tested here is the Ranzi scheme (4 reactions, 10 parameters) from 2008. Ranzi [25] used this scheme as part of a larger detailed reaction scheme of biomass pyrolysis, and it is the only scheme tested that was calibrated against isothermal and non-isothermal experiments [23]. No reaction scheme with more than 10 parameters was tested in this study, because Ranzi showed that 10 parameters are sufficient to model both isothermal and non-isothermal experiments with the same set of kinetic parameters [23].
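Any of these schemes can be plugged into the generalised formulation of Eq. 1 by encoding its formation and consumption coefficients. The sketch below does this for the Broido-Shafizadeh scheme; the char yield value is a placeholder, and the matrix layout is our own convention rather than anything prescribed by the paper:

```python
import numpy as np

# Broido-Shafizadeh scheme encoded for Eq. 1:
#   R1: cellulose -> active cellulose
#   R2: active cellulose -> tar (leaves the solid phase, no solid product)
#   R3: active cellulose -> nu_char * char (+ gases)
# Species order: [cellulose, active cellulose, char]. nu_char is a placeholder.
nu_char = 0.35
nu_f = np.array([[0.0, 0.0, 0.0],        # formation coefficients nu^f_kj
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, nu_char]])
nu_c = np.array([[1.0, 0.0, 0.0],        # consumption coefficients nu^c_kj
                 [0.0, 1.0, 1.0],
                 [0.0, 0.0, 0.0]])

def dm_dt(omega):
    """Eq. 1: mass balance of each solid species, given the reaction
    rates omega_j (which already carry the m_k dependence of Eq. 2)."""
    return (nu_f - nu_c) @ omega

# Sanity check: if only R1 proceeds, cellulose converts 1:1 to active cellulose.
print(dm_dt(np.array([0.01, 0.0, 0.0])))
```

Swapping in a different scheme only changes the two coefficient matrices, which is what makes this formulation convenient for comparing schemes of different complexity.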
Each scheme investigated was the state of the art at its time, and represents a different pathway to the formation of char. We judged them to give a good overview of the current kinetic schemes of cellulose that are feasible to incorporate into macroscale fire or reactor models. It has been argued that char is mainly a product of secondary reactions [18]. We restricted ourselves to reaction schemes which produce char from condensed-phase species, as these schemes are easily incorporated into macroscale models.

Experimental Data
We used inverse modelling on six different reaction schemes using four sets of experiments: two non-isothermal (Lin et al. [3], Varhegyi et al. [27]) and two isothermal (Cho et al. [28], Varhegyi et al. [16]). We predicted two experiments blind: one isothermal (Chen and Kuo [29]) and one non-isothermal (Antal et al. [17]). The experiments were chosen because they were judged to be free of transport limitations. We assumed negligible transport limitations if thermal lag and thermal gradients are negligible according to the following criteria. For non-isothermal experiments in nitrogen, the initial conditions need to satisfy $m_0 \beta \le 20$ mg K/min [7]. Helium has a higher thermal conductivity, which leads to faster heat transfer; therefore, we only require an initial mass below 1 mg in experiments with helium [30]. In isothermal TGA, we assume that thermal lag is negligible if the non-isothermal criteria are satisfied or if the time to heat the sample to the desired temperature Td (the heating period) is small compared to the duration of the experiment.
Thermal gradients were judged to be insignificant if either the Biot number (Bi) is below 0.1 or several initial sample masses have been tested to show that heat transfer limitations are negligible. Following the analysis by Hayhurst [31] with the thermal conductivity from [11] and [32], we obtain Biot numbers of 0.04, 0.06, and 0.37 for argon, nitrogen, and helium environments respectively. These values mean that in nitrogen and argon thermal lag is the limiting process (Bi<0.1), and only in helium thermal gradients could be important (Bi>0.1). Only Lin et al. and Cho et al. used helium with a maximum sample size of 30 mg, but both tested several initial sample masses to show that their results are unaffected by the initial mass.
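The screening criteria above can be collected into a small helper. The function name and interface are ours; the Biot numbers and thresholds are those quoted in the text:

```python
def transport_limits_ok(m0_mg, beta_K_min, gas, mass_series_tested=False):
    """Screen a TGA experiment for negligible transport limitations.
    Criteria as described in the text: m0*beta <= 20 mg K/min in nitrogen
    or argon, m0 < 1 mg in helium, and Bi < 0.1 (or a demonstrated
    insensitivity to initial sample mass) for thermal gradients."""
    biot = {"argon": 0.04, "nitrogen": 0.06, "helium": 0.37}[gas]
    if gas == "helium":
        thermal_lag_ok = m0_mg < 1.0                 # stricter helium criterion
    else:
        thermal_lag_ok = m0_mg * beta_K_min <= 20.0  # mg K/min criterion
    gradients_ok = biot < 0.1 or mass_series_tested
    return thermal_lag_ok and gradients_ok

# e.g. a 0.5 mg sample at 40 K/min in nitrogen passes both checks
print(transport_limits_ok(0.5, 40.0, "nitrogen"))
```

Such a check makes the experiment-selection rules reproducible when screening further literature data.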
To minimise possible discrepancies in the experimental data in the validation, we prefer experiments by the same authors. We chose experiments by Antal and Varhegyi [17] for the validation in the non-isothermal case. In the isothermal case, we chose experiments used previously by Corbetta et al. [23] to validate their reaction scheme for cellulose in isothermal and non-isothermal conditions.

Inverse modelling
Inverse modelling is the problem of estimating, from experimental data, the input parameters of a model that predicts the experiment [12]. The input parameters are derived by calibrating or optimising the parameters to the experimental data, which means finding the parameters that produce the best fit. Inverse problems in fire science are ill-posed, meaning there is no guarantee of a unique solution [12].
For the optimisation we used the Multialgorithm Genetically Adaptive Multiobjective Method, or AMALGAM, developed by Vrugt and Robinson [33]. Evolutionary algorithms produce an initial population and then compute the fitness or cost of each member. Fitness is a value to be maximised, while cost is a value to be minimised. The algorithm then uses the fitness and parameter values of each member in the population to recombine and modify members of the population. Through this recombination the code explores similarities of good solutions [33], which leads to improved solutions. The number of members in a population is called the population size. The updated population after each recombination step is called a generation. Usually the evolutionary process of recombination is restricted to one specific method, as in a Genetic Algorithm [12] or in a Shuffled Complex Evolution algorithm [34]. However, AMALGAM utilises four different algorithms to find the optimal solution, namely Genetic Algorithm, Particle Swarm Optimisation, Adaptive Metropolis Search and Differential Evolution. It outperforms, in both accuracy and speed, other multi-objective optimisation codes such as the non-dominated sorting genetic algorithm II (NSGA-II) [33], as well as brute-force search. The code is, therefore, able to quickly and accurately solve multi-objective (more than one fitness function) problems.
In this work, we used two objective functions expressed in terms of cost: one for isothermal experiments and one for non-isothermal experiments. The isothermal objective function is denoted by the superscript I and the non-isothermal objective function by the superscript NI. We used AMALGAM to simultaneously optimise the kinetic parameters of each reaction scheme to both objective functions. Each member in the population, one set of kinetic parameters, then has two objective function values: one for the isothermal and one for the non-isothermal objective function. This is the first time multi-objective optimisation has been used in fire science.
The objective function for non-isothermal experiments is a slight modification of Huang and Rein's objective function [35]. We simplified the expression by integrating all scaling factors into one parameter γ, based on the hypothesis that the kinetic parameters are insensitive to the details of the objective function. As we optimise to different experiments, we normalised with respect to the number of experimental points in each data set. The final form of the objective function for the non-isothermal experiments of one data set is Eq. 6:

$$\phi^{NI}_i = \sum_j \frac{1}{N_{p,j}} \sum_k \left[\gamma\left(m^{e}_{jk}-m^{p}_{jk}\right)^2 + (1-\gamma)\left(\dot{m}^{e}_{jk}-\dot{m}^{p}_{jk}\right)^2\right] \qquad (6)$$

where index i refers to the data set, index j to the different heating rates, and index k to the experimental points within a curve. $N_{p,j}$ is the total number of experimental points at heating rate j; γ is the weight factor between mass and mass loss rate, set to 0.5 in all cases to give each experimental curve equal weighting; m is the mass normalised with respect to the initial sample mass; and $\dot{m}$ is the mass loss rate of the normalised mass. The superscripts e and p denote experimental and predicted values respectively.
For the isothermal case, the objective function is only the mass term of Eq. 6:

$$\phi^{I}_i = \sum_j \frac{1}{N_{p,j}} \sum_k \left(m^{e}_{jk}-m^{p}_{jk}\right)^2 \qquad (7)$$

The mass loss rate term of Eq. 6 was eliminated in Eq. 7, as the authors of the isothermal experiments did not report the mass loss rate. The combined objective functions are then:

$$\phi^{NI} = \sum_i w_i\,\phi^{NI}_i, \qquad \phi^{I} = \sum_i w_i\,\phi^{I}_i \qquad (8)$$

where $w_i$ is the weight factor of each data set i, set to one if the data set is included in the analysis and to zero if it is excluded.
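These cost functions can be sketched directly in code. The data layout (a list of (mass, mass-loss-rate) curve pairs per heating rate) and the exact placement of γ are our assumptions about the formulation, not a verbatim implementation from the paper:

```python
import numpy as np

def phi_ni(exp_curves, pred_curves, gamma=0.5):
    """Non-isothermal cost (Eq. 6) for one data set: a sum over heating
    rates, each normalised by its number of points N_p; gamma weights the
    mass term against the mass loss rate term."""
    cost = 0.0
    for (m_e, mdot_e), (m_p, mdot_p) in zip(exp_curves, pred_curves):
        n_p = len(m_e)
        cost += (gamma * np.sum((np.asarray(m_e) - np.asarray(m_p)) ** 2)
                 + (1.0 - gamma)
                 * np.sum((np.asarray(mdot_e) - np.asarray(mdot_p)) ** 2)) / n_p
    return cost

def phi_i(exp_curves, pred_curves):
    """Isothermal cost (Eq. 7): the mass term only, since the isothermal
    experiments did not report the mass loss rate."""
    return sum(np.sum((np.asarray(m_e) - np.asarray(m_p)) ** 2) / len(m_e)
               for m_e, m_p in zip(exp_curves, pred_curves))

def phi_combined(costs, weights):
    """Eq. 8: weighted sum over data sets, w_i = 1 (included) or 0 (excluded)."""
    return sum(w * c for w, c in zip(weights, costs))
```

In a multi-objective run, each candidate parameter set would be scored with both phi_ni and phi_i, giving the two-dimensional cost used by the optimiser.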

Kinetic parameters derived from non-isothermal conditions
As discussed in the introduction, non-isothermal experiments are more important for wood and timber in a fire. Therefore, we give priority to the results with the lowest φ^NI. The Broido-Shafizadeh scheme, shown in Fig. 2 with the parameters in Table 1, predicts the non-isothermal experiments best. The model shows excellent agreement with the mass loss curves, especially in predicting the final residual mass (Fig. 2, left).
For Varhegyi et al. at all heating rates, and for Lin et al. at 15 K/min, the predictions of the peak mass loss rate are within 10% of the experimental value, which is close to the experimental error of around 8% that we estimated by propagating the standard deviation in the final mass and temperature from [6] through to the peak mass loss rate, assuming the errors are random. For the other two cases (Lin et al. at 1 and 150 K/min) the kinetic model fails to predict the peak of the mass loss rate by more than 20%. Comparing the fits for Varhegyi et al. at 2 K/min and Lin et al. at 1 K/min, we can attribute the discrepancy between model and experiments to two experimental causes: initial mass and different cellulose. The effect of both on TGA measurements has previously been shown by Antal and co-workers [6], [17]. However, as the model and Lin et al. agree well at 15 K/min, we judge the larger discrepancy at 1 and 150 K/min to be acceptable. As all other predictions are within the experimental errors, we judged the overall fit of the model to be good.
The agreement with the isothermal experiments is good qualitatively, and quantitatively at some temperatures (see Fig. 3). The discrepancy between the model and experiments could be due to the different cellulose used [17], the different apparatus used [6], or the different initial mass of the samples [6]. These factors are currently not modelled in the literature. As isothermal experiments are of lower interest than non-isothermal experiments, this discrepancy is of small concern to fire science.

Fig. 2 Comparison of predictions with measurements for non-isothermal conditions. Each row presents the non-isothermal results from one set of experiments with the residual mass (mass loss) on the left and the mass loss rate normalised to the respective experimental peak mass loss rate on the right. The predictions are for the Broido-Shafizadeh scheme with the parameters in Table 1.

Using the optimal kinetic parameters for each reaction scheme (Table 1), we blindly predicted the experiments by Antal et al. [17] and Chen and Kuo [29] to validate the kinetic parameters, as shown in Fig. 4. Blind predictions are made on experiments which were excluded from the optimisation, so that there is no coupling between parameters and experiments. They represent a good test of the optimised parameters. The models show excellent agreement with the non-isothermal experiments but only reasonable agreement with the isothermal experiments. The agreement is at the same level as with the experiments used for optimisation. These results suggest that the set of kinetic parameters is good, and that there might be an inherent trade-off between isothermal and non-isothermal experiments, a question that we examine in more detail in the next sections.
The activation energies of reaction R2 (Broido-Shafizadeh scheme, Fig. 1) compare well with values in the literature [7]. Agrawal et al. [36] found the activation energy of the precursor to levoglucosan, the main component of tar, to be 171.5 kJ/mol, which is near our value (186 kJ/mol) for reaction R1 (Fig. 1). Inverse modelling, therefore, derived values close to the theoretical values. The same applies to the parameters of the Antal scheme (Table 1), which lie perfectly within the theoretical limits for the scheme (log A = 13-14 log 1/s, E = 188-201 kJ/mol [36]). However, the above comparisons are only an indication of model quality, as an activation energy on its own is meaningless due to the compensation effect between the activation energy and the pre-exponential factor of a reaction [22].

Fig. 4 Blind predictions compared with measurements, using the parameters in Table 1. The first row presents the non-isothermal results with the residual mass (mass loss) on the left and the mass loss rate normalised to the respective experimental peak mass loss rate on the right.

Table 2 shows the cost value (Eq. 8) of different kinetic parameters found in the literature and in this study. They all predict the non-isothermal experiments well, except for the Ranzi and Agrawal parameters. The Broido-Shafizadeh scheme and the Antal scheme (Fig. 1) perform best with the parameters by Cho et al. and Lin et al. respectively. This is unsurprising, as these kinetic parameters were derived from a subset of the experiments used in this work (the experiments of the respective authors themselves). Remarkably, the classical Broido-Shafizadeh scheme with Bradbury's parameters comes in third place. It shows good agreement, nearly within the experimental error, with the non-isothermal experiments in terms of residual mass, mass loss rate and final yield. It also predicts the isothermal experiments of Varhegyi et al. 1994 within the experimental errors, but not those of Cho et al.
This agreement indicates that the parameters by Bradbury still provide good qualitative predictions, even under heating conditions outside of those used for their derivation. In contrast, the optimised parameters for each scheme in this study performed similarly in both isothermal and non-isothermal experiments, with the Ranzi scheme performing best overall. This small difference between the schemes indicates that simpler schemes are more appropriate than complex schemes for fire science, as there is little justification for complexity.
In short, we have derived a set of kinetic parameters for each kinetic scheme which predicts very well the non-isothermal experiments (6 mass loss and 6 mass loss rate curves) and reasonably the isothermal experiments (11 mass loss curves). It performs excellently in the blind prediction of non-isothermal experiments (3 mass loss and 3 mass loss rate curves) and well in isothermal experiments (5 mass loss curves); the experiments employed three different types of cellulose. Further, the kinetic parameters compare well with literature values both from other models and theoretical calculations.

Inverse modelling of isothermal experiments
The isothermal behaviour of small particles of biomass is of interest for torrefaction and for chemical reactors [15], [31]. Therefore, we derived the kinetic parameters for each reaction scheme that best fit the isothermal experiments alone. The fit to the isothermal data is slightly better than with the kinetic parameters derived from the non-isothermal experiments, as seen by comparing Table 2 and Table 3. However, the parameters derived from isothermal experiments perform reasonably to poorly in predicting the non-isothermal experiments (see Table 3). They reproduce the shape of the mass loss curves, but disagree quantitatively. Hence, isothermal and non-isothermal experiments cannot be easily reproduced by optimising to only one of the two. We test this hypothesis in the next section using multi-objective optimisation.

Multi-objective inverse modelling
Multi-objective optimisation enables simultaneous optimisation to several objective functions. We performed multi-objective optimisation between the non-isothermal experiments (Eq. 6, first objective function) and the isothermal experiments (Eq. 7, second objective function). The key result, shown in Fig. 5, is the appearance of a Pareto front for each reaction scheme between the two data sets. Each point on a Pareto front represents one set of kinetic parameters yielding a cost φ^I for the isothermal experiments and a cost φ^NI for the non-isothermal experiments. A Pareto front represents a set of optimal solutions, where no solution can be improved with respect to one objective without worsening it with respect to another [33]. For the determination of kinetic parameters, this means that the optimal solution for one set of experiments is not necessarily the optimal solution for another set of experiments. Therefore, the optimal sets of kinetic parameters for the isothermal, non-isothermal, and blind-prediction experiments are all different.
There might be no set of true kinetic parameters for current state-of-the-art reaction schemes and experiments. As found before and seen in Fig. 5, all reaction schemes performed well for the non-isothermal experiments, which implies that, once optimised, the choice of reaction scheme does not affect the accuracy for non-isothermal experiments. However, for the isothermal experiments, more complex schemes with at least 7 parameters are required to observe an improvement over the Antal scheme.
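The Pareto-front concept can be illustrated with a short sketch: given a population of cost pairs (φ^I, φ^NI), the front is the non-dominated subset. This brute-force filter is for illustration only; AMALGAM uses far more efficient sorting internally:

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of (phi_I, phi_NI) cost
    pairs. A point is dominated if another point is no worse in both
    objectives and strictly better in at least one (here: any distinct
    point that is <= in both coordinates)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Toy population of (isothermal, non-isothermal) costs:
pop = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (4.0, 4.0)]
print(pareto_front(pop))   # the three trade-off points survive
```

Each surviving point is one candidate parameter set; picking among them (for example by weighting the two costs equally, as done below) is the modeller's choice.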
The optimal solution (optimal set of kinetic parameters) on a Pareto front is the choice of the modeller. To compare the different schemes, we give equal weight to both objectives, as is done in single-objective optimisation in fire science [37]. This global minimum generally decreases as the number of parameters increases (Fig. 6). The lowest value, however, is achieved by the Broido-Shafizadeh scheme with 7 parameters, rather than by the most complex scheme, the Ranzi scheme, with 10 parameters. The Broido-Shafizadeh scheme is, therefore, the best choice for modelling cellulose in fire science. In theory, the Ranzi scheme can be reduced mathematically to any of the other tested schemes. For example, setting the pre-exponential factor of reaction R4 to zero would recover the Broido-Shafizadeh scheme, so the Ranzi scheme should be at least as good as the Broido-Shafizadeh scheme at predicting the experiments. That it is not indicates that the search bounds prohibited the code from finding this solution: our bounds are wide (see section Inverse modelling), but insufficient for the Ranzi scheme. The increase in complexity from the Broido-Shafizadeh scheme (7 parameters) to the Ranzi scheme (10 parameters) thus yields an increase in error due to an increase in parameter uncertainty. The same was observed for macroscale models of non-charring materials by Bal and Rein [38].
Experimental errors could contribute to the presence of the Pareto front. To test whether experimental errors are an important source of conflict, we plotted the rate constant for the formation of tar for each reaction scheme, the dominating reaction for the mass loss of cellulose [26]. The uncertainty around the lines results from assuming a 2 K temperature error in TGA [39] and from the experimental error calculated from the kinetic parameters of Gronli et al. [6]. For the Broido-Kiltzer scheme (Fig. 7), all rate constants are within each other's uncertainty bounds, which suggests that experimental errors are particularly important for this scheme. For the Broido-Shafizadeh scheme, the best performing scheme, the rates of R2 are outside the error margins, suggesting that other sources of conflict are important. Currently, we can only hypothesise that the difference between predictions of different experiments stems from a difference in the fundamental chemistry or from a difference in heat and mass transfer limitations. Independent of the reason, the Pareto front shows the uncertainty of current kinetic models and experiments on the microscale, which can only be captured by incorporating several experiments into the analysis.
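The effect of the assumed 2 K temperature error on a rate constant can be sketched directly from the Arrhenius expression. The parameter values below are placeholders for a tar-forming reaction, not the fitted values of any scheme in this paper:

```python
import numpy as np

# Arrhenius rate constant with an uncertainty band from a +/-2 K
# temperature error. A and E are illustrative placeholder values.
A, E, R_GAS = 1.0e13, 190.0e3, 8.314   # 1/s, J/mol, J/(mol K)

def k_arrhenius(T):
    """Rate constant k0 = A exp(-E/RT), as in Eq. 2."""
    return A * np.exp(-E / (R_GAS * T))

T = np.linspace(550.0, 650.0, 5)        # typical decomposition range, K
k_lo, k_mid, k_hi = k_arrhenius(T - 2.0), k_arrhenius(T), k_arrhenius(T + 2.0)
rel_band = (k_hi - k_lo) / k_mid        # relative width of the uncertainty band
print(np.round(rel_band, 2))            # a 2 K error shifts k by roughly
                                        # 10-15 % in either direction here
```

Because the relative error scales with E/(RT^2), the band is wider at lower temperatures, which is why small temperature errors can mask real differences between rate constants of competing schemes.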
We identified three main limitations in our study. Firstly, we assumed that different celluloses behave similarly and can be modelled with one reaction scheme and one set of kinetic parameters. Antal, Varhegyi and Jakab [17] showed that different cellulose types behave differently, so this assumption could be one cause of the Pareto front. Secondly, we restricted ourselves to first-order reaction models, but several authors suggest other reaction types for cellulose [7], [26]. Future studies should re-examine the effect of changing the reaction model or the reaction order on the Pareto front. Thirdly, we only investigated heat transfer limitations; however, mass transfer limitations can also be significant in TGA [40]. Mass transfer limitations cause prolonged residence times for the products of primary pyrolysis, which could then react further to form additional char [40]. Despite these limitations, all of the tested reaction schemes performed reasonably well across all experiments, indicating that simple schemes are of appropriate complexity for fire science. Higher complexity, like the Ranzi scheme, is only appropriate for other applications, such as industrial reactors or combustion devices, to predict a large set of species in both the solid and gas phase.

Despite these uncertainties, the kinetic parameters derived from non-isothermal experiments alone blindly predict both isothermal and non-isothermal experiments successfully. This success makes the kinetic parameters applicable to fire science, where non-isothermal heating is most important. The best fit was achieved with the Broido-Shafizadeh scheme. Complexity (number of parameters) beyond the Broido-Shafizadeh scheme yielded a decrease in fit quality, due to an increase in input parameter uncertainty. All tested reaction schemes achieved at least reasonable agreement with all experiments, which makes simple reaction schemes most appropriate for macroscale fire models.
Our results reveal that the Broido-Shafizadeh scheme is best suited for the modelling of cellulose pyrolysis in fire safety, while higher complexity is only appropriate in other applications.
Our results show that modellers should use simple reaction schemes to model the charring of timber.