Probability of committed warming exceeding 1.5 °C and 2.0 °C Paris targets

The feasibility of achieving the Paris 1.5 °C target continues to be a complex and hotly debated question. To help resolve this question we calculate probability distributions of the committed warming that would ensue if all anthropogenic emissions were stopped immediately, or at successive future times. We use a simple Earth system model together with a Bayesian approach that incorporates multiple lines of evidence and accounts for known model biases. This analysis reveals a wide range of possible outcomes, including no further warming, but also a 15% chance of overshooting the 1.5 °C target, and a 1%-2% chance for 2 °C, even if all emissions had stopped in 2020. If emissions merely stabilize in 2020 and stop in 2040, these probabilities increase to 90% and 17%. The uncertainty arises mainly from that of present forcing by aerosols. Rather than there being a fixed date by which emissions must stop, the probability of meeting either target, which is already below 100%, gradually diminishes with delays in eliminating emissions, by 3%-4% per year for 1.5 °C.


Introduction
Under the Paris Accord the nations of the world have agreed to limit the increase in global-mean temperature to well below 2 °C and preferably to 1.5 °C above preindustrial. As global temperature has already increased by 1.1 °C, debate has ensued about the feasibility of the proposed targets, with recent reports declaring the stricter target to be feasible while others argue that it is not (Australian Academy of Science 2021), invoking for example new revisions of likely climate sensitivity (Sherwood et al 2020). Feasibility arguments depend not only on how the system responds to anthropogenic forcings, but also on the emissions pathways that would be considered 'feasible' (physically, economically or politically); possible future changes in natural sources or sinks of carbon; and the risk level that is considered acceptable, given the uncertainties. Very low emissions scenarios considered by the Intergovernmental Panel on Climate Change (IPCC 2021) do not provide a simple and clear baseline since they rely on complex assumptions about future emissions including atmospheric CO2 removal through negative emissions (Carton et al 2020, Zickfeld et al 2021).
These questions have sometimes been examined via the concept of 'committed warming': loosely, the future warming due to past human activity that is already locked in. The original concept considered the additional warming after atmospheric greenhouse gas concentrations (hence radiative forcing) were stabilized at current values; this leads to additional warming that is so far unrealized due to thermal inertia of the system. Maintaining constant concentrations implies some continued anthropogenic emissions (e.g. Friedlingstein et al 2020) to balance ongoing natural removal processes. An alternative is the hypothetical scenario in which all anthropogenic emissions are abruptly set to zero (Collins et al 2013, MacDougall et al 2020). This 'emissions commitment scenario' is a simply defined upper limit on what could physically be achieved by emissions reduction alone, setting aside political or economic feasibility, and geoengineering. We focus here on this scenario, comparing present and future commitment dates to provide a novel and intuitive measure of the consequences of delay. In current coupled climate models this zero emissions commitment leads to little if any further warming from prior emissions (Matthews and Caldeira 2008, MacDougall et al 2020). Although there is still unrealized warming associated with the thermal inertia of the deep oceans, the rapid reduction of radiative forcing due to natural drawdown of CO2 leads to cooling of the surface oceans, and in the net these two responses are thought to approximately cancel (Solomon et al 2009). Thus there would be little committed surface warming, and hence, at least in principle, sufficiently strong policies could achieve a 1.5 °C target. There are however at least three issues complicating this 'cumulative emissions' reasoning.
First, cessation of GHG-generating activities would also lead to a dramatic reduction in anthropogenic aerosol, hence a rapid increase in global temperature (Matthews and Zickfeld 2012), since the cooling effect of the aerosol would be removed on a time scale of less than a month (Hansen and Lacis 1990). This effect has been neglected in some 'commitment' studies (Mauritsen and Pincus 2017). Its magnitude is highly uncertain, especially because the uncertainties in climate sensitivity and aerosol forcing interact, with a stronger present-day aerosol forcing implying (all other things being equal) a higher sensitivity given the known warming over the industrial period. This interaction amplifies the uncertainty of the eventual impact (Schwartz 2018). A recent assessment indicates that this uncertain forcing could be offsetting 20%-50% of the present-day radiative forcing by greenhouse gases (Bellouin et al 2020), highlighting the risk of a substantial global temperature jump if aerosol emissions are halted. More generally, the cumulative emission-global temperature relationship is complicated by non-CO2 influences and can break down in some circumstances, including very strong mitigation (MacDougall and Friedlingstein 2015).
A second issue is that warming patterns observed over recent decades (in particular, strong warming near the Maritime Continent and weak or no warming in large parts of the eastern Pacific) are not reproduced by the coupled climate models underpinning published projections. Atmosphere model and satellite observation studies (Zhou et al 2016) show that these patterns have induced an anomalous increase in average low cloud cover over oceans, which has counteracted some GHG forcing over the historical period. This means that the increase in GMST thus far has been brought about by a smaller effective net radiative forcing than previously thought. This unexpected warming pattern presumably represents unforced multi-decadal variability and/or inaccurately modeled transient responses to rapidly growing radiative forcing (Marvel et al 2018). In either case the pattern is temporary, implying that equilibrium climate sensitivity (Armour 2017, Andrews et al 2018, Marvel et al 2018) and committed warming (Zhou et al 2021) are greater than would otherwise be inferred from the historical record. The size of this effect is highly uncertain and it does not seem to have been explicitly considered in previous projections.
A third issue is that the cumulative-emissions constraint neglects surprise forcings, due either to carbon-cycle 'tipping points' such as releases of GHG from permafrost or biome collapses such as the Amazon rainforest, or to unexpected changes in aerosol for example from volcanic eruptions. These surprises are typically excluded from projections but could occur in reality.
In addition to addressing these three physical issues it is important to employ a probability framework, since both forcings and climate responses are inherently uncertain. Some commitment studies have employed such a framework but have based their probabilities on climate model ensembles (Smith et al 2018). This situation has been changed by a recent community assessment of climate sensitivity (Sherwood et al 2020), hereafter S20, which provided PDFs of sensitivity. Owing to their multivariate sampling approach S20 also obtained joint distributions of any desired set of variables, in particular sensitivity and present-day aerosol forcing. Although previous studies have estimated such joint distributions (Skeie et al 2014, Johansson et al 2015, Mauritsen and Pincus 2017), these have been based only on the historical record. The S20 results for the first time account for uncertain SST pattern effects, and are informed by all important lines of evidence including process understanding, historical warming and paleoclimate changes. This is particularly important since S20 found that the historical record was in fact the weakest constraint on sensitivity, and that CMIP6 GCMs have a broader distribution of sensitivity than implied by observational evidence. The S20 marginal PDF of sensitivity features a 66% range of (2.1, 3.9) °C, which is consistent with the 'likely' range assessed by the 2021 IPCC AR6 based on similar reasoning and evidence (Forster et al 2021) and only half as wide as those of previous IPCC reports including their 1.5 °C report. While AR6 has computed remaining carbon budgets in a way that accounts for the newly narrowed range (Canadell et al 2021), it is not clear how they accounted for all of the above issues, and they did not recompute committed warming.
Since aerosol forcing is poorly constrained a priori, the S20 joint posterior distribution of aerosol forcing (marginal 66% range (−1.0, −0.4) W m⁻² for the time period analyzed here) and climate sensitivity is arguably the most well-informed probability assessment of the two quantities currently available.
Here we combine the S20 results with a simple Earth system model to calculate the PDF of committed warming. Our calculations account for aerosol removal, 'pattern effect' errors in models, and anticipated future carbon cycle behavior, but not potential carbon tipping points or changes in carbon sinks beyond those suggested by the historical trajectory. In omitting these we assume that any such tipping points would be averted if emissions stopped now.

Approach and model
The PDF of any temperature change ΔT can be written as

P(ΔT) = ∫∫ P(ΔT | S, F_aerh) P(S, F_aerh) dS dF_aerh,    (1)

where S is a measure of climate sensitivity, taken here to be the effective sensitivity, and F_aerh is a forcing, taken here as the historical aerosol forcing (present minus preindustrial, i.e. 1850). The first factor in the integrand is the conditional probability distribution of ΔT (again relative to 1850) given S and F_aerh, and the second is the joint probability distribution of S and F_aerh. We take P(S, F_aerh) from the baseline calculation of S20, who also discuss the effective sensitivity and why it was favored as a useful measure; in particular the distinction between effective and equilibrium sensitivity is relatively small and emerges on time scales longer than those relevant here. Note that while we use S here rather than the transient climate response (TCR), our approach implicitly computes the TCR. Also, since the historical forcing uncertainty is dominated by that of aerosols, we follow S20 in not explicitly recognizing the uncertainties of ozone, methane and other forcings, taking the F_aerh PDF spread to implicitly include a small contribution from them (since ozone is also short-lived, separating its uncertainty would not affect results). The joint distribution P(S, F_aerh) obtained from the code provided by S20 is shown in figure 1.
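As a concrete illustration of how (1) can be evaluated by Monte Carlo, the sketch below marginalizes a toy conditional model over a mock joint distribution. The bivariate normal for (S, F_aerh) and the linear conditional relation are invented stand-ins for the S20 posterior and the EBM ensemble, not the paper's actual numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical stand-in for the S20 joint distribution P(S, F_aerh):
# correlated draws of sensitivity S (degC) and historical aerosol forcing (W m^-2),
# with the negative correlation (stronger aerosol cooling <-> higher S) seen in figure 1.
z = rng.multivariate_normal([3.0, -0.7], [[0.49, -0.1], [-0.1, 0.04]], size=n)
S, F_aerh = z[:, 0], z[:, 1]

# Toy conditional model for P(dT | S, F_aerh), standing in for the EBM ensemble:
# stronger (more negative) aerosol forcing and higher S give more committed warming.
dT = 0.05 * S - 0.6 * F_aerh - 0.3 + rng.normal(0.0, 0.05, n)

# Marginalizing (1) by Monte Carlo amounts to pooling the conditional draws.
p5, p50, p95 = np.percentile(dT, [5, 50, 95])
prob_exceed = float((dT > 0.4).mean())
```

Any statistic of the committed warming, such as an exceedance probability, then follows directly from the pooled sample.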
This joint distribution confirms that higher S generally implies stronger F_aerh. However the relationship is weaker than seen in previous studies (Forest et al 2002, Kiehl 2007). There are two reasons for this. First, the uncertain 'pattern effect' noted above and taken into account in S20 allows a given value of S to be reconcilable with a wider range of F_aerh. Thus earlier studies were overconfident in assigning a compact relationship between these quantities. Second, other information unrelated to historical warming (paleoclimate and process information) narrows the ranges of S and F_aerh but not the spread in their relationship, therefore reducing their correlation. Nonetheless an important correlation remains, which the present method takes into account.
We calculate P(ΔT|S, F_aerh) using a widely used energy balance climate model (EBM) with a two-layer ocean (Gregory 2000), coupled to a simple interactive carbon cycle (see appendix; the model is also available online at carbonator.org). There are five key uncertain parameters in the EBM: S, F_aerh, and three ocean parameters, namely the effective heat capacities c_u and c_l of the upper (mixed layer) and deep ocean and a heat exchange coefficient γ. The carbon cycle model also has uncertain parameters. Our sampling method first identifies four alternative carbon cycle parameter sets that accurately reproduce historical CO2, CH4 and other greenhouse gas (GHG) changes, while spanning the likely range of future CO2 drawdown rates (because the probabilities determined here prove to be relatively insensitive to carbon-cycle uncertainty, this sample is sufficient, permitting more explicit examination of the temporal behavior of different model configurations). We then obtain a large ensemble of EBM parameter combinations for these four carbon parameter sets. The climate projections generated by this ensemble directly yield P(ΔT(t)|S, F_aerh) for future times t via the bivariate marginal sample PDF. This procedure allows determination of a PDF of any quantity of interest subject to the constraints on the parameters imposed by the priors and consistency with historical trends.
A limitation of energy balance models is that they assume the net outward global TOA radiation to be a function solely of global-mean temperature (diminished by global-mean forcing), thus not representing the pattern effect noted above. This limitation is shared by the model employed here. However it has been shown that transient pattern effects on TOA radiation in GCMs can be captured indirectly, at least on centennial time scales, by adjusting the parameters relating the rate of heat exchange between the upper and the deep ocean (Geoffroy et al 2013b). Therefore we implicitly represent the range of plausible historical pattern effects by permitting higher γ and c_l values so that the energy sink calculated by the model includes both the pattern-related TOA loss and deep-ocean uptake; in so doing we assume that the larger 'pattern effect' implied by observations (Marvel et al 2018) can also be captured in this way. Thus, by combining the P(ΔT(t)|S, F_aerh) found above with the marginal bivariate P(S, F_aerh) found by S20 (see appendix), we obtain a posterior PDF of ΔT that implicitly includes the constraints used by S20 on S and F_aerh, the pattern effect on transient sensitivity, and the known historical trends of global-mean temperature and GHG amounts.
For the emissions-commitment scenario we set all anthropogenic GHG emissions after 2020 to zero. For simplicity we also eliminate all anthropogenic aerosol and tropospheric ozone forcing after this date, i.e. assuming a complete cessation of net emission of climate-influencing substances from human activities. Emissions cutoff leads to a gradual decrease in forcings by persistent GHG (figure 2(a)) but, because of their short lifetimes, a sudden zeroing of forcings due to ozone and aerosol. We alternatively consider scenarios where emissions are held constant at 2020 levels and stop abruptly in 2030 or 2040.

Results
Immediately after an abrupt cutoff of emissions there is a broad spread of possible temperature changes (figure 2(b)), as found also in some previous commitment studies (Armour and Roe 2011, Schwartz 2018, Smith et al 2018). Because of different possible combinations of climate sensitivity and present-day aerosol forcing (figure 1), the 90% probability range of additional warming above 2020 stretches from zero to nearly 0.5 °C about a decade after cutoff. The uncertainty associated with the CO2 removal rate by the carbon cycle (represented by the spread of the four curves in each of the clusters shown) is negligible for the first few decades after cutoff, and therefore does not contribute to the uncertainty in the peak temperature. Instead the uncertainty arises almost entirely from that of historical forcing, as seen from the near-perfect relationship between color (warming) and horizontal position (aerosol forcing) in figure 1(a). Effective ocean mixing and heat capacity parameters exhibit some dependence on aerosol forcing (figures 1(b) and (c)); hence constraining heat uptake and 'pattern effect' processes could indirectly constrain committed warming by constraining the aerosol forcing.
The range of possible near-term warming peaks follows from the magnitudes of the forcings (figures 1, 2(a), and S4). Current forcing by anthropogenic ozone is 0.40 ± 0.23 W m⁻². If aerosol forcing magnitude is at the weak end of the possible range (around 0.2-0.4 W m⁻² cooling), then removal of ozone forcing outweighs it, producing an immediate (though slight) net cooling effect. However if aerosol forcing is at the strong end (upwards of 1.0 W m⁻²), then its removal leads to a substantial net warming effect in spite of the loss of ozone forcing. As shown in figure 2(b), some global temperature response to these forcings occurs within a decade. Note that if aerosol, ozone and/or methane emissions cutoff were more gradual, these responses would still occur albeit later.
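The bookkeeping in this paragraph can be made explicit: zeroing the short-lived forcings changes the net forcing by −F_ozone − F_aer, with F_aer negative. A minimal sketch, using the 0.40 W m⁻² ozone best estimate from the text and two illustrative aerosol values:

```python
def forcing_step_at_cutoff(f_ozone: float, f_aer: float) -> float:
    """Instantaneous change in net forcing (W m^-2) when short-lived
    anthropogenic ozone and aerosol forcings are zeroed at cutoff.
    f_aer is negative (cooling), so removing it contributes warming."""
    return -f_ozone - f_aer

weak = forcing_step_at_cutoff(0.40, -0.3)    # weak aerosol: slight net cooling push
strong = forcing_step_at_cutoff(0.40, -1.0)  # strong aerosol: substantial net warming push
```

With weak aerosol forcing the step is −0.10 W m⁻² (ozone loss dominates); with strong aerosol forcing it is +0.60 W m⁻², consistent with the contrast described above.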
Temperatures gradually fall off after reaching this peak, for reasons related to other forcings and heat transport. Forcing by persistent GHGs steadily decreases after cutoff. This is dominated at first by methane (due to its ∼11 year lifetime), whose forcing drops by about 0.5 W m⁻² within 20 years of the cutoff (figure 2(a)). Forcings by N2O and halogen compounds decrease much more slowly; only 20-30 years after cutoff do these forcing decreases, in addition to that of CO2, collectively match the impact of methane reduction. The fall in temperatures is due to heat transport from the upper ocean (to deep ocean and space) exceeding the remaining forcing. Removal of anthropogenic sources of both CO2 and methane hence appears necessary to bring median temperatures later in the century near or below what they were prior to cutoff. While the slow temperature decline is robust to uncertainties, the PDF spread grows with time because of uncertainty in climate sensitivity: although not important initially, it becomes increasingly so over time. So does the uncertainty associated with carbon sinks, though it remains much smaller than that associated with the present-day forcing.
Figure 2. Time series of forcings and additional warming after 2020 emissions cutoff. In (a), radiative forcings of persistent GHGs are shown relative to their values in 2020; ozone and aerosol forcings, which change instantaneously in the model, are not shown. Grey shading shows the probability distribution of the year of maximum committed warming. In (b), five clusters of curves represent the 99th, 95th, 50th, 5th and 1st percentile temperatures relative to that in 2020. These percentiles are conditioned on one of four carbon-cycle model configurations (1-4), shown by the four different colors (light/dark blue shading shows 1-99/5-95 percentile ranges, respectively, for configuration 1).

The PDF of peak temperature above preindustrial (figure 3) is nearly Gaussian, but with a long tail toward high values due to the possibility of strong aerosol forcing (figure 1). Because of this long probability tail, there is only an 85% chance of remaining below 1.5 °C even with an emissions cutoff in 2020. This chance drops to 45% for a 2030 cutoff and 10% for a 2040 cutoff, since both the mean and spread of the committed warming increase. In other words the chance of exceeding the 1.5 °C target is increasing by 3%-4% per year. There is no guarantee of meeting even the 2 °C target, with a 1.5% chance of exceedance after a 2020 cutoff, 6% after a 2030 cutoff, and 17% after a 2040 cutoff, i.e. increasing by only about 0.2% per year now but more than 1% per year after a couple of decades. Note however that these exceedances are temporary and that the PDF of temperature in 2100 roughly matches that of the peak temperature for a cutoff one decade earlier. On the other hand if emissions grow between 2020 and the later cutoff dates, rather than remaining fixed as in our calculations, this makes exceedance more probable.
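Given an ensemble sample of peak warming such as that behind figure 3, exceedance probabilities of this kind are simple tail fractions. The sketch below uses an invented skewed sample (a shifted gamma distribution) purely as a stand-in for the real ensemble:

```python
import numpy as np

def exceedance_probs(peak_T, targets=(1.5, 2.0)):
    """Fraction of ensemble members whose peak warming exceeds each target (degC)."""
    return {t: float((peak_T > t).mean()) for t in targets}

rng = np.random.default_rng(1)
# Hypothetical peak-warming sample above preindustrial: roughly Gaussian
# with a long warm tail, qualitatively like figure 3 (values invented).
peak_T = 1.25 + rng.gamma(shape=2.0, scale=0.08, size=50_000)
probs = exceedance_probs(peak_T)
```

Repeating this for ensembles generated under later cutoff dates yields the per-year growth in exceedance risk quoted above.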
A few important caveats apply. We ignore future volcanic eruptions and other unforeseen events, as well as geoengineering or continued aerosol emissions, any of which could in principle reduce future warming: our calculation simply represents what would happen with no further human activity that would influence climate. We do not distinguish between fossil-fuel related emissions and others such as agriculture. Also, the carbon cycle model does not consider unexpected collapses of sinks that could increase CO2 and temperatures later in the calculation. Finally, we do not account for unforced natural variability of global-mean temperature which, if included, would introduce some additional spread.

Conclusions
The median trajectory of committed further warming found here reaches a 0.1 °C peak about four years after cutoff, and declines slowly thereafter, becoming negative within 10-20 years. It is very similar to that found by the IPCC 1.5 °C report (Allen et al 2018) even though our calculations reflect substantial improvements in the understanding of climate sensitivity and transient warming, which might have been expected to increase the committed warming (Australian Academy of Science 2021, Zhou et al 2021). The revised equilibrium climate sensitivity does not alter the result significantly because it has only a weak direct effect on the peak committed warming given observations up to the time of cutoff. And although transient pattern effects do matter, they are compensated here by a weakening of the likely aerosol forcings compared to values used previously and in most CMIP climate models (mean of −1.01 W m⁻² in CMIP5 (Smith et al 2020) compared to −0.75 W m⁻² here). This potentially puzzling robustness might be illuminated by noting that the 'constant concentration' commitment would be much less robust. If for example anthropogenic aerosol eventually disappeared while GHG forcings remained fixed at 3 W m⁻², the 'likely' (66% range) long-term warming above preindustrial would narrow from 1.1 °C-3.4 °C based on the previous sensitivity range to 2.0 °C-2.9 °C based on S20. Thus, climate sensitivity matters for the impact of future emissions (implicit in the constant-concentration scenario), but only forcing matters when it comes to past emissions.
Because uncertainty in historical forcing remains large, there remains a substantial and underappreciated spread of potential warmings even under the strongest possible mitigation scenarios. The 90% probability range of committed warming is wider than the difference between the 1.5 °C and 2 °C Paris targets; there is a small but non-negligible chance that it is already too late (absent geoengineering) to avoid 1.5 °C at least temporarily, and even a very small chance we already cannot avoid 2 °C. At the same time, two decades of continued present-day emissions would still probably keep the planet under 2 °C, and possibly even under 1.5 °C, if no more emissions were to occur after that. This is roughly consistent with recent 'carbon budget' calculations (IPCC 2021), but depends on all GHG emissions ending, not just CO2. Although a possible decarbonization scenario could involve continuing aerosol emissions for the express purpose of avoiding the warming noted here, we consider this to be geoengineering, which carries different ethical and legal implications than emissions associated with economic activity. Continued research into aerosol forcing and transient climate responses is necessary to better constrain how much warming is already inevitable and how rapidly emissions must be cut to achieve a given target.
These results emphasize that precise statements of a year by which it will be 'too late' to avoid dangerous climate change are not tenable. Instead a risk-reward framework is more appropriate, whereby the risk of exceeding various targets grows gradually with every year of delay. One advantage of this framing is that, given a monetized cost C of failure to meet a Paris target, one could immediately estimate the expected economic loss per year of delay as RC, where R is the annual increase in failure probability (3%-4% yr⁻¹ in the case of the 1.5 °C target). While the safest time to stop emissions was decades ago, given the uncertainties the expected marginal benefit of decarbonization is large and will continue to increase for decades into the future.
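The risk-reward arithmetic is a one-liner; a minimal sketch with R = 3.5% yr⁻¹ (mid-range of the 3%-4% figure above) and a notional unit cost C:

```python
def expected_loss_per_year(annual_risk_increase: float, failure_cost: float) -> float:
    """Expected economic loss per year of delay: R * C, where R is the annual
    increase in the probability of missing the target and C its monetized cost."""
    return annual_risk_increase * failure_cost

# With R = 3.5%/yr for the 1.5 degC target, each year of delay costs
# 0.035 * C in expectation (C normalized to 1 here).
loss = expected_loss_per_year(0.035, 1.0)
```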

Data availability statement
The data that support the findings of this study are openly available at the following URL/DOI: https://github.com/alexsengupta/carbonator_commitment_scenarios.

Acknowledgments
We thank S Wakelam for providing the joint PDF of forcing and sensitivity using the code from S20. S C S was funded by ARC FL150100035. Work by S E S was supported by the U.S. Department of Energy under Contract No. DE-SC0012704. Development of the Carbonator model and web site was supported by the ARC Centre of Excellence for Climate Extremes. The climate model is freely available online at www.carbonator.org. Codes used to produce the results shown here are available at: https://github.com/alexsengupta/carbonator_commitment_scenarios.

Temperature
We use a standard two-layer energy balance model (Gregory 2000), which calculates changes in upper (T_u) and deep ocean (T_l) temperature based on the following equations (Held et al 2010, Geoffroy et al 2013a):

c_u dT_u/dt = F − λ T_u − γ (T_u − T_l)
c_l dT_l/dt = γ (T_u − T_l)

where λ = F_2x/S is the climate feedback parameter, F_2x being the effective forcing from a doubling of CO2, and F is the total effective radiative forcing due to CO2, CH4, halogens, N2O, anthropogenic aerosols, volcanic aerosols, ozone and solar radiation. The forcings and temperatures are defined as departures from preindustrial (1850) values, assumed in steady state. Forcings are computed from modeled atmospheric composition or, in some cases, imposed (see below).
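A minimal explicit-Euler sketch of the two-layer model (an upper-layer budget balancing forcing, radiative feedback written as F_2x/S times temperature, and exchange with the deep layer) is given below. All numerical parameter values (heat capacities, γ, F_2x) are illustrative placeholders rather than the calibrated values of table 1:

```python
def ebm_step(T_u, T_l, F, S, c_u=8.0, c_l=100.0, gamma=0.7, F2x=4.0, dt=1.0):
    """One explicit-Euler year of a two-layer EBM. lam = F2x / S is the
    feedback parameter; c_u, c_l are effective heat capacities in
    W yr m^-2 K^-1. Parameter values here are illustrative only."""
    lam = F2x / S
    dT_u = (F - lam * T_u - gamma * (T_u - T_l)) / c_u   # upper-layer budget
    dT_l = gamma * (T_u - T_l) / c_l                      # deep-ocean uptake
    return T_u + dt * dT_u, T_l + dt * dT_l

# Sanity check: under constant forcing equal to F2x (doubled CO2), the upper
# layer should equilibrate at the effective sensitivity S.
T_u = T_l = 0.0
for _ in range(5000):
    T_u, T_l = ebm_step(T_u, T_l, F=4.0, S=3.0)
```

At equilibrium the exchange term vanishes (T_u = T_l) and F = λ T_u gives T_u = S, which the integration reproduces.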
Our study reports changes in mean surface temperature T (ocean surface and 2 m air temperature over land). We assume T = 1.11 T_u, as changes in ocean surface temperature track T_u, while on average land surfaces warm 1.5 times faster than ocean surface temperature globally in models and 1.32-1.43 times in recent observations (Wallace and Joshi 2018), and we account for the fact that the Earth's surface is 71% ocean.

CO2
The concentration of CO2 is calculated from a prescribed preindustrial initial state using the marine carbon cycle model of Glotter et al (2014), supplemented by a slightly simplified version of the land carbon model of Svirezhev et al (1999), leading to the following equations:

dC_a/dt = E_C − k_a (C_a − A B C_u) − P_v + (1 − ε) m C_v + δ C_s
dC_u/dt = k_a (C_a − A B C_u) − k_d (C_u − C_l/d)
dC_l/dt = k_d (C_u − C_l/d)
dC_v/dt = P_v − m C_v
dC_s/dt = ε m C_v − δ C_s

where C is carbon mass with subscripts a, u, l, v, s referring respectively to atmosphere (as CO2), upper ocean, lower ocean, vegetation and soil; E_C is the annual net anthropogenic emission of carbon (Meinshausen et al 2011, including via land use); P_v is the uptake of atmospheric carbon due to net primary production by vegetation growth, which varies with atmospheric concentration via a parameter a_2; k_a and k_d are respectively exchange coefficients between the atmosphere and upper ocean, and between upper and lower ocean; m is the rate of decay of plant carbon, of which a fraction ε goes to the soil with the remainder going to the atmosphere; δ is the decomposition rate of soil carbon; and d is the ratio of lower to upper ocean volume. Atmosphere-ocean carbon fluxes depend on ocean pH, which is a function of upper ocean carbon concentration: H and Alk are the upper ocean hydrogen ion concentration and alkalinity, respectively, and k_1, k_2 are dissociation constants. The atmospheric CO2 mixing ratio in ppm is C_a/2.13. The CO2 source from methane oxidation is ignored. Note that the five carbon reservoirs are not independent, since the sum of their tendencies equals E_C. Also, the parameter k_d is physically related to γ in the EBM but this dependence is ignored here. In Svirezhev et al (1999) several of these exchange parameters were given weak temperature- or time-dependence, which would have very small effects under the scenarios examined and is neglected here. The parameters A and B quantify the partitioning of carbon between atmosphere and upper ocean reservoirs at equilibrium, as derived by Glotter et al (2014).
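The five-reservoir bookkeeping described above can be sketched as follows. For brevity the carbonate factor A·B and the vegetation uptake P_v are frozen at constants here, whereas in the model both vary with the state; all parameter values are invented placeholders. The key structural property, that the reservoir tendencies sum to E_C, survives this simplification:

```python
def carbon_tendencies(C, E_C, k_a=0.2, k_d=0.05, AB=1.0, d=50.0,
                      P_v=60.0, m=0.1, eps=0.4, delta=0.02):
    """Right-hand sides of a five-reservoir carbon model (GtC yr^-1).
    C = (C_a, C_u, C_l, C_v, C_s): atmosphere, upper ocean, lower ocean,
    vegetation, soil. A*B and P_v are frozen at constants purely for
    illustration; parameter values are invented placeholders."""
    C_a, C_u, C_l, C_v, C_s = C
    F_ao = k_a * (C_a - AB * C_u)    # atmosphere -> upper ocean flux
    F_ul = k_d * (C_u - C_l / d)     # upper -> lower ocean flux
    dC_a = E_C - F_ao - P_v + (1 - eps) * m * C_v + delta * C_s
    dC_u = F_ao - F_ul
    dC_l = F_ul
    dC_v = P_v - m * C_v
    dC_s = eps * m * C_v - delta * C_s
    return (dC_a, dC_u, dC_l, dC_v, dC_s)

# Carbon conservation: the tendencies must sum to the emission rate E_C.
tend = carbon_tendencies((850.0, 900.0, 37000.0, 550.0, 1500.0), E_C=10.0)
```

Every internal flux appears once as a source and once as a sink, so total carbon changes only through E_C; this is a useful invariant to test any implementation against.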
Quantities with subscript (0) refer to preindustrial initial values shown in table 1. As the carbon cycle model employed here is calibrated on prior CO2 emissions and amounts, it might lose accuracy for CO2 concentrations substantially greater than at present.

Methane
Methane mixing ratio in ppb obeys

dC_CH4/dt = E_CH4 − C_CH4/τ_CH4,    τ_CH4 = τ_CH4(0) (C_CH4/C_CH4(0))^α,

according to which it is removed by gas-phase chemical reactions at a time scale τ_CH4; E_CH4 is the source of anthropogenic methane in Tg CH4 yr⁻¹ (converted to ppb yr⁻¹). We determine τ_CH4(0) and α by fitting to historical atmospheric mixing ratios given historical emissions. Although the uncertainty in CH4 emissions affects the calibration, this uncertainty would have very little effect on the peak warming, and is therefore neglected here.
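A sketch of the methane budget follows: source minus chemical loss, with a lifetime assumed to lengthen with concentration as a power law (one plausible reading of the fitted τ_CH4(0) and α). The values of τ_CH4(0), α, the preindustrial mixing ratio, and the Tg-to-ppb conversion factor here are all illustrative, not the fitted ones:

```python
def ch4_tendency(C_ch4, E_ch4, tau0=9.0, C0=722.0, alpha=0.3, tg_per_ppb=2.75):
    """d[CH4]/dt in ppb yr^-1: anthropogenic source (Tg CH4 yr^-1, converted
    to ppb) minus chemical loss. The lifetime is assumed to lengthen with
    concentration as tau = tau0 * (C/C0)**alpha; all values illustrative."""
    tau = tau0 * (C_ch4 / C0) ** alpha
    return E_ch4 / tg_per_ppb - C_ch4 / tau

# After an emissions cutoff (E_ch4 = 0), methane simply decays toward its
# preindustrial level on roughly decadal time scales.
decay_rate = ch4_tendency(1900.0, 0.0)
```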

Other persistent GHGs
We specify historical N2O concentrations, and assume that the concentration decays exponentially after emissions cutoff with a 109-year time constant. For delayed cutoff scenarios, concentrations are held constant between 2020 and the emissions cutoff, then decay. Halogen concentrations follow the RCP8.5 scenario (Meinshausen et al 2011) for gases controlled under the Montreal Protocol, which already assumes emissions are small and declining rapidly by 2020.
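This decay rule is easy to state in code; here the anomaly is expressed as a fraction of its value at cutoff:

```python
import math

def n2o_anomaly(year, cutoff=2020, tau=109.0, anomaly_at_cutoff=1.0):
    """Post-cutoff N2O concentration anomaly (above preindustrial), as a
    fraction of its value at the cutoff year, decaying exponentially with
    the 109-year time constant used in the text."""
    if year <= cutoff:
        return anomaly_at_cutoff
    return anomaly_at_cutoff * math.exp(-(year - cutoff) / tau)

# By 2100 roughly half the anthropogenic N2O excess still remains,
# consistent with N2O forcing declining much more slowly than methane's.
remaining_2100 = n2o_anomaly(2100)
```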

Other forcings
Historical forcings due to ozone, anthropogenic aerosol, and volcanic aerosol are imposed based on RCP time series (Meinshausen et al 2011). Those of ozone and aerosol are rescaled to yield a given change over the historical period, equal to F_aerh in the case of anthropogenic aerosol and 0.40 W m⁻² in the case of ozone (again from AR6). Any albedo changes due to land-use change are neglected. Overall forcing uncertainty is dominated by anthropogenic aerosol, so aerosol forcing alone is treated as a random variable in our sample while the other forcings are given fixed, best-estimate values.

Sampling approach
Past and future CO2 behavior are determined differently by carbon cycle model parameters, so to address uncertainty in future behavior, we choose four parameter sets (table 1) that all produce the same (observed) past behavior but span the range of drawdown behavior found in the ZECMIP model intercomparison study under a scenario similar to the commitment one investigated here (MacDougall et al 2020). To do this we first generate a random sample of all parameters within ±25% of their default values (sample size 500 000 members), perform historical runs with this sample, and discard all members whose historical CO2 trends differ from observed by more than ±1 ppm or whose RMSE over the historical interval exceeds 6 ppm (see supplemental figure S1 (available online at stacks.iop.org/ERL/17/064022/mmedia)). We then compute the ZECMIP CO2 decay scenario for the remaining members, and manually select four that span most of the range of decays reported (see supplemental figure S2). Each of these carbon parameter sets is then used to repeatedly calculate putative climate trajectories starting at 1850 and running through 2020. For each calculation, the five uncertain EBM parameters are drawn randomly (uniformly and independently) from within their allowed ranges (see table 1). Trajectories whose temperature difference between 1861-1880 and 2006-2018 falls within the likely observed range of 0.89-1.17 K (IPCC 2021) are considered successful, continued through 2100, and set aside. This repeats until there are 100 000 successful runs (see supplemental figure S3). The allowed prior ranges for the EBM ocean parameters are determined by starting with ranges inferred from CMIP climate model behavior by Geoffroy et al (2013a), then expanding those of γ and c_l until successful ensemble members are found for all plausible combinations of {S, F_aerh}. It is found that this procedure requires a span of possible γ and c_l values extending higher than those seen in CMIP models (see figure 1).
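The accept-reject step on historical warming can be sketched as follows. The `historical_warming` function here is a toy linear stand-in for actually integrating the EBM over 1850-2020, and the prior ranges are illustrative, not those of table 1:

```python
import numpy as np

rng = np.random.default_rng(2)

def historical_warming(S, F_aerh, gamma):
    """Toy stand-in for running the EBM over 1850-2020 and returning the
    1861-1880 to 2006-2018 warming; the real constraint uses the full model.
    Coefficients are invented but have the right signs: warming rises with
    S, weakens with stronger aerosol cooling and with faster ocean uptake."""
    return 0.45 * S + 0.9 * F_aerh + 1.5 - 0.2 * gamma

accepted = []
while len(accepted) < 1000:
    # Uniform, independent priors on the uncertain parameters (toy ranges).
    S = rng.uniform(1.5, 6.0)
    F_aerh = rng.uniform(-1.6, -0.2)
    gamma = rng.uniform(0.5, 1.5)
    dT_hist = historical_warming(S, F_aerh, gamma)
    if 0.89 <= dT_hist <= 1.17:   # likely observed range (IPCC 2021)
        accepted.append((S, F_aerh, gamma, dT_hist))
```

The accepted set corresponds to the 'successful' trajectories of the text: a top-hat likelihood on historical warming applied to draws from uniform priors.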
Similarly to previous work (e.g. Smith et al 2018), to implement (1) we randomly generate 25 000 pairs {S, F_aerh} from the S20 joint distribution, and for each one, find the closest successful model run (sampling with replacement). This yields a 25 000 member sample of model runs having the same P(S, F_aerh) as S20, each consistent with historical trends. The sample ΔT in this ensemble accordingly obeys (1). This approach is Bayesian, with independent, restricted uniform priors on ocean model parameters; a uniform 25% chance of each of the four chosen carbon-cycle parameter sets; a specified S20 bivariate prior on S and F_aerh, itself determined by a Bayesian analysis of evidence mostly not considered here; and a top-hat likelihood on historical warming centered on the best estimate. Finally we note that the results are insensitive to the carbon cycle compared to the EBM, forcing, and feedbacks, especially as the procedure does not include climate feedback on the carbon cycle.
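The nearest-neighbor resampling step can be sketched as follows, with toy run parameters and a mock bivariate normal standing in for the S20 joint distribution; distances are computed in coordinates scaled by the sample spread so that S and F_aerh are comparable:

```python
import numpy as np

rng = np.random.default_rng(3)

# Successful model runs, each tagged with its (S, F_aerh) pair (toy values
# drawn uniformly, as if surviving the historical-warming screen).
run_params = np.column_stack([rng.uniform(1.5, 6.0, 2000),
                              rng.uniform(-1.6, -0.2, 2000)])

# Target pairs drawn from a mock stand-in for the S20 joint distribution.
targets = rng.multivariate_normal([3.0, -0.7], [[0.49, -0.1], [-0.1, 0.04]],
                                  size=1000)

# For each target pair, pick the closest successful run (with replacement),
# after scaling each coordinate by its sample spread.
scale = run_params.std(axis=0)
d2 = (((targets[:, None, :] - run_params[None, :, :]) / scale) ** 2).sum(axis=-1)
matched = d2.argmin(axis=1)          # index of nearest run per target pair
resampled = run_params[matched]      # ensemble now distributed ~ P(S, F_aerh)
```

Sampling with replacement means popular runs can be matched many times, which is exactly what reweights the uniform-prior ensemble toward the target joint distribution.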