Inferring the number of COVID-19 cases from recently reported deaths

We estimate the number of COVID-19 cases from newly reported deaths in a population without previous reports. Our results suggest that by the time a single death occurs, hundreds to thousands of cases are likely to be present in that population. This suggests containment via contact tracing will be challenging at this point, and other response strategies should be considered. Our approach is implemented in a publicly available, user-friendly, online tool.


Introduction
As the coronavirus disease 2019 (COVID-19) 1 epidemic continues to spread worldwide, there is mounting pressure to assess the scale of epidemics in newly affected countries as rapidly as possible. We introduce a method for estimating the number of cases from recently reported COVID-19 deaths. Our results suggest that by the time the first deaths have been reported, there may be hundreds to thousands of cases in the affected population. We provide epidemic size estimates for several countries, as well as a user-friendly, web-based tool that implements our model 2 .

Methods
Using deaths to infer cases
COVID-19 deaths have started to be notified in countries where few or no cases had previously been reported 3 . Given the nonspecific symptoms 4 and the high rate of mild disease 5 , a COVID-19 epidemic may go unnoticed in a new location until the first severe cases or deaths are reported 6 . Available estimates of the case fatality ratio, i.e. the proportion of cases that are fatal (CFR, 7,8 ), can be used to estimate the number of cases that would have shown symptoms at the same time as the fatal cases. We developed a model that uses the CFR alongside other epidemiological factors underpinning disease transmission to infer the likely number of cases in a population from newly reported deaths.
Our approach involves two steps: first, reconstructing historic cases by assuming that all non-fatal cases go undetected; and second, modelling epidemic growth from these cases up to the present day to estimate the likely number of current cases. We account for uncertainty in the underlying epidemiological processes by using stochastic simulations to estimate the relevant quantities.
Two pieces of information are needed to reconstruct past cases: the number of cases for each reported death, and their dates of symptom onset. Intuitively, the CFR provides some information on the number of cases, as it represents the expected number of deaths per case, so that CFR⁻¹ corresponds to the expected number of cases per death. In practice, the number of cases until the first reported death can be drawn from a Geometric distribution with an event probability equal to the CFR. Note that while our approach could in theory use a different CFR for each case (to account for different risk groups), our current implementation uses the same CFR for all cases in a simulation. Dates of symptom onset are simulated from the distribution of the time from onset to death, modelled as a discretised Gamma distribution with a mean of 15 days and a standard deviation of 6.9 days 9 .
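This reconstruction step can be sketched in a few lines of Python (the published implementation is in R; the function name and structure here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(42)

def reconstruct_cases_for_death(cfr, mean_delay=15.0, sd_delay=6.9):
    """Number of past cases implied by one reported death, plus the fatal
    case's date of symptom onset (in days before the death)."""
    # Cases until the first death follow a Geometric distribution with
    # event probability equal to the CFR, so E[cases per death] = 1 / CFR.
    n_cases = int(rng.geometric(cfr))
    # Onset-to-death delay: discretised Gamma with the stated mean and sd,
    # converted to shape/scale parameters.
    shape = (mean_delay / sd_delay) ** 2
    scale = sd_delay ** 2 / mean_delay
    onset_days_before_death = int(round(rng.gamma(shape, scale)))
    return n_cases, onset_days_before_death
```

With a CFR of 2%, repeated draws of `n_cases` average around 50 cases per death, matching the CFR⁻¹ intuition above.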
Once past cases are reconstructed, we use a branching process model to forecast new cases 10,11 . This model combines data on the reproduction number (R) and the serial interval distribution to simulate new cases y_t on day t from a Poisson distribution:

y_t ~ Poisson( R × Σ_{s=1}^{t−1} y_{t−s} w(s) )

where w(·) is the probability mass function of the serial interval distribution. More details on this simulation model can be found in Jombart et al. 11 . Optionally, this model can also incorporate heterogeneity in transmissibility by using a Negative Binomial distribution instead of a Poisson. The serial interval distribution was characterised as a discretised Lognormal distribution with a mean of 4.7 days and a standard deviation of 2.9 days 12 . We assume that past cases caused secondary transmissions independently (i.e. that they are not ancestral to each other), so that cases simulated for each death can be added. This assumption is most likely to be met when reported deaths are close in time. As the time between reported deaths increases, past cases may come from the same epidemic trajectory rather than separate, additive ones, in which case our method would overestimate epidemic size. Further details on model design and parameter values are provided in Supplementary Material. Our approach is implemented in the R software 13 and publicly available as R scripts (see Extended data) 14 , as well as in a user-friendly, interactive web interface available at: https://cmmid.github.io/visualisations/inferring-covid19-cases-from-deaths 2 .
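The Poisson renewal step above can be sketched in Python (a minimal, self-contained version, not the authors' R implementation; the 30-day serial interval truncation and function names are simplifications chosen here for illustration):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Discretised Lognormal serial interval with mean 4.7 d, sd 2.9 d,
# converted to the underlying (mu, sigma^2) parameterisation.
MEAN_SI, SD_SI = 4.7, 2.9
SIGMA2 = math.log(1 + (SD_SI / MEAN_SI) ** 2)
MU = math.log(MEAN_SI) - SIGMA2 / 2

def lognorm_cdf(x):
    if x <= 0:
        return 0.0
    return 0.5 * (1 + math.erf((math.log(x) - MU) / math.sqrt(2 * SIGMA2)))

# w[k-1] = probability the serial interval is k days (k = 1..30),
# discretised by differencing the CDF and renormalised.
w = np.array([lognorm_cdf(k + 0.5) - lognorm_cdf(k - 0.5) for k in range(1, 31)])
w /= w.sum()

def project(initial_cases, R, n_days):
    """Branching process: y_t ~ Poisson(R * sum_s y_{t-s} w(s))."""
    y = np.zeros(len(initial_cases) + n_days)
    y[: len(initial_cases)] = initial_cases
    for t in range(len(initial_cases), len(y)):
        lam = R * sum(y[t - s] * w[s - 1] for s in range(1, min(t, 30) + 1))
        y[t] = rng.poisson(lam)
    return y
```

For example, `project([10, 10, 10, 10, 10], R=2.0, n_days=20)` grows a small reconstructed outbreak forward by 20 days; replacing the Poisson draw with a Negative Binomial one would add the optional transmission heterogeneity.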

How many cases for a single death?
We first used our model to assess likely epidemic sizes when an initial COVID-19 death is reported in a new location. We ran simulations for a range of plausible values of R (1.5, 2 and 3) and CFR (1%, 2%, 3% and 10%), assuming a single death on 1st March 2020 8 . We simulated 25,000 epidemic trajectories for each parameter combination. Simulations for an 'average severity' scenario 8 with R = 2 and CFR = 2% show that by the time a death has occurred, hundreds to thousands of cases may have been generated in the affected population (Figure 1). Results vary widely across the other parameter settings, and amongst simulations within a given setting (Table 1), with higher R and lower CFR leading to higher estimates of the number of cases. However, a majority of settings give results similar to our 'average' scenario, suggesting that a single death is likely to reflect several hundred cases. Results were qualitatively unchanged when incorporating heterogeneity in the model using recent estimates 15 , but prediction intervals were wider (Extended data).
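A crude back-of-envelope version of this scenario sweep can be written in a few lines. This is not the paper's branching-process simulation: growth over the onset-to-death delay is approximated here as exponential with rate log(R) divided by the mean serial interval, an assumption introduced purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def crude_cases_at_first_death(R, cfr, n_sims=25_000, si_mean=4.7,
                               death_mean=15.0, death_sd=6.9):
    """Rough 95% interval and median for cases present when the first
    death is reported, under a given (R, CFR) scenario."""
    shape = (death_mean / death_sd) ** 2
    scale = death_sd ** 2 / death_mean
    # Past cases per death ~ Geometric(CFR); onset-to-death delay ~ Gamma.
    n0 = rng.geometric(cfr, n_sims)
    delay = rng.gamma(shape, scale, n_sims)
    # Exponential-growth shortcut in place of the full branching process.
    growth = np.exp(np.log(R) / si_mean * delay)
    return np.quantile(n0 * growth, [0.025, 0.5, 0.975])

lo, med, hi = crude_cases_at_first_death(R=2, cfr=0.02)
```

Even this simplification reproduces the qualitative conclusion: for R = 2 and CFR = 2%, the median sits in the hundreds of cases, with a very wide interval.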

Recently affected countries
We applied our approach to three countries which recently reported their first COVID-19 deaths (Spain, Italy and France), using the same range of parameters as in the single-death analysis. In order to compare predictions to cases actually reported in these countries, projections were run until 4th March 2020. Overall, predictions from the model under the baseline scenario (R = 2, CFR = 2%) were in line with reported epidemic sizes.

Discussion
Several limitations need to be considered when applying our method. First, our approach only applies to deaths of patients who became symptomatic in the location considered, which should usually be the case in places where traveller screening is in place. We also assume constant transmissibility (R) over time, which implies that behaviour changes and control measures have not yet taken place, and that there is no depletion of susceptible individuals. Consequently, our method should only be used in the early stages of a new epidemic, when these assumptions are reasonable. Similarly, the assumption that each death reflects an independent, additive epidemic trajectory is most likely to hold early on, when reported deaths are close in time (e.g. no more than a week apart). Used on deaths spanning longer time periods, our approach is likely to overestimate epidemic sizes.
Contact tracing has been shown to be an efficient control measure when imported cases can be detected early on 16 , in addition to permitting the estimation of key epidemiological parameters 12 . When the first cases reported in a new location are mostly deaths, however, our results suggest that the underlying size of the epidemic would make control via contact tracing extremely challenging. In such situations, efforts focusing on social distancing measures such as school closures and self-isolation may be more likely to mitigate epidemic spread.

Data availability
Underlying data
All data underlying the results are available as part of the article and no additional source data are required. This project contains the file 'extended_data' (PDF), which contains supplemental information and methodological details regarding the model described in this article.

Extended data
Extended data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
Source code and R scripts are available at: https://github.com/thibautjombart/covid19_cases_from_deaths.

The CMMID COVID-19 Working Group gave input on the method, contributed data and provided elements of discussion.
All authors read and approved the final version of the manuscript.

Centre for the Mathematical Modelling of Infectious Diseases COVID-19 Working Group
The following authors were part of the Centre for Mathematical Modelling of Infectious Disease 2019-nCoV working group: Mark Jit, Charlie Diamond, Fiona Sun, Billy J Quilty, Kiesha Prem, Nicholas Davies, Stefan Flasche, Alicia Rosello, James D Munday, Petra Klepac, Joel Hellewell. Each contributed in processing, cleaning and interpretation of data, interpreted findings, contributed to the manuscript, and approved the work for publication.
Peer review comments
…to reconstruct the past history to know how much trouble one is currently in?
What would the effect of a heterogeneous CFR be? (I believe this would correspond e.g. to a 'beta-Geometric distribution', unless one instead wanted to treat it as a finite mixture of probabilities for discrete risk categories).
It would be nice to have a little more detail (i.e. a few sentences) on the simulation procedure. I see how to get from CFR and deaths to a total number of preceding cases, and how to simulate times of symptom onset for the observed deaths. It's not completely obvious to me how to get from there to 'history of past cases' (i.e. incidence over time); does one run the renewal process backward in time? Or use branching-process theory to find the time distribution of symptom onset of the index case given the current size of the epidemic?
Please clarify "We assume that past cases caused secondary transmissions independently (i.e. are not ancestral to each other), so that simulated cases for each death can be added." Does this mean that you assume that all observed deaths are from separate lineages/transmission chains? (The last sentence of the paragraph suggests that, but the initial statement could probably be clearer.) (Does this assumption even matter if we are in the branching-process regime?).
I appreciate that the authors are trying to keep things simple, and thus the scenario-based approach (try the model for a range of CFR/R values and see what is implied) is useful. I note that the confidence intervals are already very wide (that is part of the point), but several quantities are treated as known (the delay distribution and the serial interval distribution); I wonder how sensitive the results are to these assumptions (probably not much; I'm guessing that with R specified they might only change the timing, not the numbers). Given that the authors are already basing the answers on 25,000 simulations, it might not be too hard to construct point estimates and intervals based on a prior/uncertainty distribution of R and CFR (rather than constructing separate scenarios), while also allowing for uncertainty in the delay and serial interval distributions.
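This suggestion could be sketched as follows (not part of the published model; the prior distributions below are invented solely to illustrate the idea of propagating parameter uncertainty through the simulations):

```python
import numpy as np

rng = np.random.default_rng(3)

# Draw R and CFR from assumed uncertainty distributions, one pair per
# simulation, instead of fixing them on a scenario grid.
n_sims = 25_000
R = rng.lognormal(mean=np.log(2.0), sigma=0.2, size=n_sims)  # hypothetical prior
cfr = rng.beta(a=4, b=196, size=n_sims)                      # mean ~ 2%, hypothetical

# One Geometric draw per simulation, using that simulation's CFR,
# giving cases per death with parameter uncertainty folded in.
cases_per_death = rng.geometric(cfr)
interval = np.quantile(cases_per_death, [0.025, 0.5, 0.975])
```

The resulting quantiles blend stochastic and parameter uncertainty into a single interval, rather than one interval per scenario.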

Is the rationale for developing the new method (or application) clearly explained? Yes
Is the description of the method technically sound? Yes