Parameter estimation and uncertainty quantification for an epidemic model, in preparation

We examine estimation of the parameters of Susceptible-Infective-Recovered (SIR) models in the context of least squares. We review the use of asymptotic statistical theory and sensitivity analysis to obtain measures of uncertainty for estimates of the model parameters and the basic reproductive number (R₀), an epidemiologically significant parameter grouping. We find that estimates of different parameters, such as the transmission parameter and recovery rate, are correlated, with the magnitude and sign of this correlation depending on the value of R₀. Situations are highlighted in which this correlation allows R₀ to be estimated with greater ease than its constituent parameters. Implications of correlation for parameter identifiability are discussed. Uncertainty estimates and sensitivity analysis are used to investigate how the frequency at which data is sampled affects the estimation process and how the accuracy of estimates improves, and their uncertainty decreases, as data is collected over the course of an outbreak. We assess the informativeness of individual data points in a given time series to determine when more frequent sampling (if possible) would prove to be most beneficial to the estimation process. This technique can be used to design data sampling schemes in more general contexts.

1. Introduction. The use of mathematical models to interpret disease outbreak data has provided much insight into the epidemiology and spread of many pathogens, particularly in the context of emerging infections. The basic reproductive number, R₀, which gives the average number of secondary infections that result from a single infective individual over the course of their infection in an otherwise entirely susceptible population (see, for example, [1] and [20]), is often of prime interest. In many situations, the value of R₀ governs the probability of the occurrence of a major outbreak, the typical size of the resulting outbreak and the stringency of control measures needed to contain the outbreak (see, for example, [10,26,30]).
While it is often simple to construct an algebraic expression for R₀ in terms of epidemiological parameters, one or more of these values is typically not obtainable by direct methods. Instead, their values are usually estimated indirectly by fitting a mathematical model to incidence or prevalence data (see, for example, [3,12,32,38,41,42]), obtaining a set of parameters that provides the best match, in some sense, between model output and data. It is, therefore, crucial that we have a good understanding of the properties of the process used to fit the model and its limitations when employed on a given data set. An appreciation of the uncertainty accompanying the parameter estimates, and indeed whether a given parameter is even individually identifiable based on the available data and model, is necessary for our understanding.
The simultaneous estimation of several parameters raises questions of parameter identifiability (see, for example, [2,6,17,22,24,27,36,43,44,45,46]), even if the model being fitted is simple. Oftentimes, parameter estimates are highly correlated: the values of two or more parameters cannot be estimated independently. For instance, it may be the case that, in the vicinity of the best fitting parameter set, a number of sets of parameters lead to effectively indistinguishable model fits, with changes in one estimated parameter value offset by changes in another.
Even if individual parameters cannot be reliably estimated due to identifiability issues, it might still be the case that a compound quantity of interest, such as the basic reproductive number, can be estimated with precision. This would occur, for instance, if the correlation between the estimates of individual parameters was such that the value of R₀ varied little over the sets of parameters that provided equal quality fits.
Statistical theory is often used to guide data collection, with sampling theory providing an idea of the amount of data required in order to obtain parameter estimates whose uncertainty lies within a range deemed to be acceptable. In time-dependent settings, sampling theory can also provide insight into when to collect data in order to provide as much information as possible. Such analyses can be extremely helpful in biological settings where data collection is expensive, ensuring that sufficient data is collected for the enterprise to be informative, but in an efficient manner, avoiding excessive data collection or the collection of uninformative data from certain periods of the process.
In this paper we discuss the use of sensitivity analysis [21,37] and asymptotic statistical theory [18,39] to quantify the uncertainties associated with parameter estimates obtained by the use of least squares model fitting in an epidemiological context. The theory also quantifies the correlation between estimates of the different parameters, and we discuss the implications of correlation for the estimation of R₀. We investigate how the magnitude of uncertainty varies with both the number of data points collected and their collection times. We suggest an approach that can be used to identify the times at which more intensive sampling would be most informative in terms of reducing the uncertainties associated with parameter estimates.
In order to make our presentation as clear as possible, we employ throughout the simplest model for a single outbreak, the SIR model, and use synthetic data sets generated using the model. This idealized setting should be the easiest one for the estimation methodology to handle, so we imagine that any issues that arise (such as non-identifiability of parameters) would carry over to, and indeed be more delicate in, more realistic settings involving more complex models or real-world data sets. The use of synthetic data allows us to investigate the performance and behavior of the estimation procedure for infections that have a range of transmission potentials, providing a broader view of the estimation process than would be obtained by focusing on a particular individual data set.
The paper is organized as follows: the simple SIR model employed in this study is outlined in Section 2. The statistical theory and sensitivity analysis of the model is presented in Section 3. Section 4 discusses the synthetic data sets that we use to demonstrate the approach. Section 5 presents the results of model fitting, and discusses the estimation of R₀. The impact of sampling frequency and sampling times are examined in Section 6. Section 7 explores parameter identifiability for the SIR model. We conclude with a discussion of the results.
2. The model. Since our aim here is to present an examination of general issues surrounding parameter estimation, we choose to use a simple model containing a small number of parameters. We employ the standard deterministic Susceptible-Infective-Recovered compartmental model (see, for example, [1,19,25]) for an infection that leads to permanent immunity and that is spreading in a closed population (i.e., we ignore demographic effects). The population is divided into three classes, susceptible, infectious and recovered, whose numbers are denoted by S, I, and R, respectively. The closed population assumption leads to the total population size, N, being constant and we have S + I + R = N.
We assume that transmission is described by the standard incidence term βSI/N, where β is the transmission parameter, which incorporates the contact rate and the probability that contact (between an infective and a susceptible) leads to transmission. Individuals are assumed to recover at a constant rate, γ, which gives the average duration of infection as 1/γ.
Because of the equation S + I + R = N, we can determine one of the state variables in terms of the other two, reducing the dimension of the system. Here, we choose to eliminate R, and we so focus our attention on the dynamics of S and I. The model can then be described by the differential equations

dS/dt = −βSI/N,
dI/dt = βSI/N − γI,

together with the initial conditions S(0) = S₀ and I(0) = I₀.
The behavior of this model is governed by the basic reproductive number. For this SIR model, R₀ = β/γ. The average number of secondary infections per individual at the beginning of an epidemic is given by the product of the rate at which new infections arise (β) and the average duration of infectiousness (1/γ). R₀ tells us whether an epidemic will take off (R₀ > 1) or not (R₀ < 1) in this deterministic framework.
This SIR model is formulated in terms of the number of infectious individuals, I(t), i.e., the prevalence of infection. Disease outbreak data, however, is typically reported in terms of the number of new cases that arise in some time interval, i.e., the disease incidence. The incidence of infection over the time interval (tᵢ₋₁, tᵢ) is given by integrating the rate of infection over the time interval:

∫ from tᵢ₋₁ to tᵢ of βS(t)I(t)/N dt.

Notice that, since the SIR model does not distinguish between infectious and symptomatic individuals (even though these differ for many infections), we equate the incidence of new infections and new cases. For the simple SIR model employed here, the incidence can be calculated by the simpler formula S(tᵢ₋₁) − S(tᵢ), since the number of new infections is given by the decrease in the number of susceptibles over the interval of interest.
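To make the setup concrete, the following sketch illustrates the reduced (S, I) system and the prevalence and incidence observations derived from it. Our own computations were carried out with MATLAB's ode45 routine (see Section 4); this Python sketch is merely an illustrative equivalent, with all names ours and the observation window chosen arbitrarily rather than by the rule described in Section 4.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, x, beta, gamma, N):
    """Reduced SIR system: R = N - S - I has been eliminated."""
    S, I = x
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    return [dS, dI]

# Values matching the synthetic-data setup of Section 4 (gamma = 1, so beta = R0).
N, S0, I0 = 10_000, 9_900, 100
beta, gamma = 3.0, 1.0

t_obs = np.linspace(0.0, 15.0, 51)           # observation times t_0, ..., t_n (illustrative window)
sol = solve_ivp(sir_rhs, (t_obs[0], t_obs[-1]), [S0, I0],
                args=(beta, gamma, N), t_eval=t_obs, rtol=1e-8, atol=1e-8)
S, I = sol.y

prevalence = I                               # I(t_i): number currently infectious
incidence = S[:-1] - S[1:]                   # new cases in (t_{i-1}, t_i]
```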

3. Methodology.
Estimating the parameters of the model given a data set (solving the inverse problem) is here accomplished by using either ordinary least squares (OLS) or a weighted least squares method known as either iteratively reweighted least squares or generalized least squares (GLS) [18]. Uncertainty quantification is then performed using asymptotic statistical theory (see, for example, Seber and Wild [39]) applied to the statistical model that describes the epidemiological data set. Although the application of this theory to epidemiological settings has been developed and explained in a number of previous works (see, for example, [3,15,16]), to aid the reader we provide a brief general summary of this theory. In order to facilitate comparison with previous papers co-written by us, we largely follow the development and notation laid out in [3,15,16], albeit with a few notational deviations and changes in emphasis.
The statistical model assumes that the epidemiological system is exactly described by some underlying dynamic model (for us, the deterministic SIR model) together with some set of parameters, known as the true parameters, but that the observed data arises from some corruption of the output of this system by noise (e.g., observational errors). We write the true parameter set as the p-element vector θ₀, noting that some of these parameters may be initial conditions of the dynamic model if one or more of these are unknown. The n observations of the system, Y₁, Y₂, …, Yₙ, are made at times t₁, t₂, …, tₙ. We assume the statistical model can be written as

Yᵢ = M(tᵢ; θ₀) + Eᵢ,   i = 1, …, n,

where M(tᵢ; θ₀) is our deterministic model (either for prevalence or incidence, as appropriate) evaluated at the true value of the parameter, θ₀, and the Eᵢ depict the errors. We write Y = (Y₁, …, Yₙ)ᵀ. The appropriate estimation procedure depends on the properties of the errors Eᵢ. We assume that the errors have the form

Eᵢ = M(tᵢ; θ₀)^ξ εᵢ,

where ξ ≥ 0. The εᵢ are assumed to be independent, identically distributed random variables with zero mean and (finite) variance σ₀². The random variables Yᵢ then have means E(Yᵢ) = M(tᵢ; θ₀) and variances Var(Yᵢ) = M(tᵢ; θ₀)^(2ξ) σ₀². If ξ is taken to equal 0 then Eᵢ = εᵢ, and the error variance is assumed to be independent of the magnitude of the predicted value of the observed quantity. This noise structure is often termed absolute noise in the literature. Positive values of ξ correspond to the assumption that the error variance scales with the predicted value of the quantity being measured. If ξ = 1, the standard deviation of the noise scales linearly with M: the average magnitude of the noise is a constant fraction of the true value of the quantity being measured. This situation is often referred to as relative noise. If, instead, ξ = 1/2, the variance of the error scales linearly with M: we refer to this as Poisson noise.
The least squares estimator θ̂_LS is a random variable obtained by consideration of the cost functional

J(θ | Y) = Σᵢ₌₁ⁿ wᵢ [Yᵢ − M(tᵢ; θ)]²,   (5)

in which the weights wᵢ are given by

wᵢ = 1 / M(tᵢ; θ)^(2ξ).   (6)

If ξ = 0, then wᵢ = 1 for all i, and in this case the estimator is obtained by minimizing J(θ|Y), that is,

θ̂_LS = arg min_θ J(θ | Y).   (7)

In this case, known as ordinary least squares (OLS), all data points are of equal importance in the fitting process. When ξ > 0, the weights lead to more importance being given to data points that have a lower variability (i.e., those corresponding to smaller values of the model). If the values of the weights were known ahead of time, estimation could proceed by a weighted least squares minimization of the cost functional (5). The weights, however, depend on θ and so an iterative process is instead used, employing estimated weights. An initial ordinary (unweighted) least squares fit is carried out and the resulting model is used to provide an initial set of weights. Weighted least squares is then carried out using these weights, providing a new model and hence a new set of weights. The weighted least squares step is repeated with successively updated weights until some termination criterion, such as the convergence of successive estimates to within some specified tolerance, is achieved [18].
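The reweighting loop just described can be summarized in the following sketch. This is not the code used for our results: the function and parameter names are ours, it assumes a routine model(theta, t_obs) returning M(tᵢ; θ) (such as a solver for SIR prevalence), and it uses a generic Nelder-Mead minimization of the cost functional (5).

```python
import numpy as np
from scipy.optimize import minimize

def irls_fit(Y, t_obs, model, theta0, xi, tol=1e-6, max_rounds=20):
    """Iteratively reweighted least squares for the error model
    Var(Y_i) = M(t_i; theta)^(2*xi) * sigma0^2  (xi = 0 reduces to OLS)."""
    def cost(theta, w):
        return np.sum(w * (Y - model(theta, t_obs))**2)

    w = np.ones(len(Y))                            # round 0: ordinary least squares
    theta = minimize(cost, theta0, args=(w,), method="Nelder-Mead").x
    for _ in range(max_rounds):
        M = model(theta, t_obs)
        w = 1.0 / np.maximum(M, 1e-12)**(2 * xi)   # updated weights w_i = M^(-2*xi)
        theta_new = minimize(cost, theta, args=(w,), method="Nelder-Mead").x
        if np.linalg.norm(theta_new - theta) < tol:  # termination criterion
            return theta_new, w
        theta = theta_new
    return theta, w
```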
The asymptotic statistical theory, as detailed in [18,39], describes the distribution of the estimator θ̂_LS = θ̂_LS⁽ⁿ⁾ as the sample size n → ∞. (In this paragraph we include the superscript n to emphasize sample size dependence.) Provided that a number of regularity and sampling conditions are satisfied (discussed in detail in [39]), this estimator has a p-dimensional multivariate normal distribution with mean θ₀ and variance-covariance matrix Σ₀ given by

Σ₀ ≈ Σ₀⁽ⁿ⁾ = (σ₀²/n) [Ω₀⁽ⁿ⁾]⁻¹,   (8)

where

Ω₀⁽ⁿ⁾ = (1/n) χ⁽ⁿ⁾(θ₀)ᵀ W⁽ⁿ⁾(θ₀) χ⁽ⁿ⁾(θ₀).   (9)

So, θ̂_LS ∼ N(θ₀, Σ₀). We note that existence and invertibility of the limiting matrix Ω₀ = lim_{n→∞} Ω₀⁽ⁿ⁾ is required for the theory to hold. In Equation (9), W⁽ⁿ⁾(θ) is the diagonal weight matrix, with entries wᵢ, and χ⁽ⁿ⁾(θ) is the n × p sensitivity matrix, whose entries are given by

χᵢⱼ⁽ⁿ⁾(θ) = ∂M(tᵢ; θ)/∂θⱼ.   (10)

Because we do not have an explicit formula for M(tᵢ; θ), the sensitivities must be calculated using the so-called sensitivity equations. As outlined in [21,37], for the general m-dimensional system

ẋ = F(x, t; θ),   (11)

with state variable x ∈ ℝᵐ and parameter θ ∈ ℝᵖ, the matrix of sensitivities, ∂x/∂θ, satisfies

d/dt (∂x/∂θ) = (∂F/∂x)(∂x/∂θ) + ∂F/∂θ,   (12)

with initial conditions ∂x(0)/∂θ = 0_{m×p}.
Here, ∂F/∂x is the Jacobian matrix of the system. This initial value problem must be solved simultaneously with the original system (11). Sensitivity equations for the state variables with respect to initial conditions can be derived in a similar way, except that the second term on the right side of Equation (12) is absent and the appropriate matrix of initial conditions is I_{m×m}. The sensitivity equations for the specific case of the SIR model of interest here are presented in the appendix.
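The appendix is not reproduced here; as a guide, the following sketch (ours, with illustrative parameter values) augments the (S, I) system with Equation (12) for θ = (β, γ) and integrates states and sensitivities simultaneously, yielding the columns of the sensitivity matrix χ for prevalence data.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_aug_rhs(t, z, beta, gamma, N):
    """SIR states plus sensitivities dx/dtheta for theta = (beta, gamma).
    z = [S, I, dS/dbeta, dS/dgamma, dI/dbeta, dI/dgamma]."""
    S, I = z[0], z[1]
    phi = z[2:].reshape(2, 2)                 # sensitivity matrix dx/dtheta

    F = np.array([-beta * S * I / N,
                  beta * S * I / N - gamma * I])
    dF_dx = np.array([[-beta * I / N, -beta * S / N],
                      [ beta * I / N,  beta * S / N - gamma]])
    dF_dtheta = np.array([[-S * I / N, 0.0],
                          [ S * I / N,  -I ]])

    dphi = dF_dx @ phi + dF_dtheta            # Equation (12)
    return np.concatenate([F, dphi.ravel()])

N, S0, I0, beta, gamma = 10_000, 9_900, 100, 3.0, 1.0
z0 = [S0, I0, 0, 0, 0, 0]                     # parameter sensitivities start at zero
t_obs = np.linspace(0.0, 15.0, 51)
sol = solve_ivp(sir_aug_rhs, (0.0, t_obs[-1]), z0, args=(beta, gamma, N),
                t_eval=t_obs, rtol=1e-8, atol=1e-8)
dI_dbeta, dI_dgamma = sol.y[4], sol.y[5]      # the two columns of chi for prevalence data
```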
Because the true parameter θ₀ is usually not known, we use the estimate of θ in its place in the estimation formulae. The value of σ₀² is approximated by

σ̂² = (1/(n − p)) Σᵢ₌₁ⁿ wᵢ [Yᵢ − M(tᵢ; θ̂_LS)]²,

where the factor 1/(n − p) ensures that the estimate is unbiased. The matrix

Σ̂ = σ̂² [χ(θ̂_LS)ᵀ W(θ̂_LS) χ(θ̂_LS)]⁻¹

provides an approximation to the covariance matrix Σ₀.
Standard errors for the components of the estimator θ̂_LS are approximated by taking square roots of the diagonal entries of Σ̂, while the off-diagonal entries provide approximations for the covariances between pairs of these components. The uncertainty of an estimate of an individual parameter is conveniently discussed in terms of the coefficient of variation (CV), that is, the standard error of an estimate divided by the estimate itself. The dimensionless property of the CV allows for easier comparison between uncertainties of different parameters. In a related fashion, the covariances can be conveniently normalized to give correlation coefficients, defined by

ρ_{θᵢ,θⱼ} = Σ̂ᵢⱼ / (Σ̂ᵢᵢ Σ̂ⱼⱼ)^(1/2).

The asymptotic statistical theory provides uncertainties for individual parameters, but not for compound quantities, such as the basic reproductive number, that are often of interest. For instance, if we had the estimator θ̂_LS = (β̂, γ̂)ᵀ, a simple point estimate for R₀ would be the ratio of the realized values of β̂ and γ̂. To understand the properties of the corresponding estimator we examine the expected value and variance of β̂/γ̂. Because this quantity is the ratio of two random variables, there is no simple exact form for its expected value or variance in terms of the expected values and variances of the estimators β̂ and γ̂. Instead, we have to use approximation formulas derived using the method of statistical differentials (effectively a second order Taylor series expansion, see [29]), and obtain

E(β̂/γ̂) ≈ (β₀/γ₀) [1 + Var(γ̂)/γ₀² − Cov(β̂, γ̂)/(β₀γ₀)]   (17)

and

Var(β̂/γ̂) ≈ (β₀/γ₀)² [Var(β̂)/β₀² + Var(γ̂)/γ₀² − 2 Cov(β̂, γ̂)/(β₀γ₀)].   (18)

Here we have made use of the fact that E(β̂) = β₀, the true value of the parameter, and E(γ̂) = γ₀. The variance equation has previously been used in an epidemiological setting by Chowell et al. [13]. Equation (17), however, shows us that estimation of R₀ by dividing point estimates of β and γ provides a biased estimate of R₀. The bias factor can be written in terms of the correlation coefficient and coefficients of variation, giving

1 + CV(γ̂)² − ρ_{β,γ} CV(β̂) CV(γ̂).

This factor only becomes important when the CVs are on the order of one. In such a case, however, the estimability of the parameters is already in question. Thus, under most useful circumstances, estimating R₀ by the ratio of point estimates of β and γ suffices.
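The uncertainty calculations of this section can be assembled as in the sketch below (again with names of our choosing): given the sensitivity matrix, weights, and an estimate of σ₀², it returns standard errors, coefficients of variation, the correlation coefficient, and the approximate standard error of R̂₀ from Equation (18).

```python
import numpy as np

def uncertainty_summary(chi, w, sigma2, beta_hat, gamma_hat):
    """Approximate covariance Sigma = sigma2 * (chi^T W chi)^(-1), standard
    errors, CVs, correlation, and the delta-method variance of R0 = beta/gamma.
    chi: n x 2 sensitivity matrix (columns: beta, gamma); w: weights w_i."""
    Sigma = sigma2 * np.linalg.inv(chi.T @ (w[:, None] * chi))
    se = np.sqrt(np.diag(Sigma))                     # standard errors
    cv = se / np.array([beta_hat, gamma_hat])        # coefficients of variation
    rho = Sigma[0, 1] / (se[0] * se[1])              # correlation coefficient
    # Equation (18): variance by the method of statistical differentials.
    R0 = beta_hat / gamma_hat
    var_R0 = R0**2 * (cv[0]**2 + cv[1]**2 - 2 * rho * cv[0] * cv[1])
    return se, cv, rho, R0, np.sqrt(var_R0)
```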
4. Generation of synthetic data, model fitting and estimation. In order to facilitate our exploration of the parameter estimation problem, we choose to use simulated data. This 'data' is generated using a known model, a known parameter set and a known noise structure, putting us in an idealized situation in which we know that we are fitting the correct epidemiological model to the data, that the correct statistical model is being employed and where we can compare the estimated parameters with their true values. Furthermore, since we know the noise process, we can generate multiple realizations of the data set and hence directly assess the uncertainty in parameter estimates by fitting the model to each of the replicate data sets. As a consequence, we can more completely evaluate the performance of the estimation process than would be possible using a single real-world data set. The use of synthetic data also allows us to investigate parameter estimation for diseases that have differing levels of transmissibility. We considered three hypothetical infections, with low, medium and high transmissibility, using R₀ values of 1.2, 3 and 10, respectively. In each case we took the recovery rate γ to equal 1, which corresponds to measuring time in units of the average infectious period. The value of β was then chosen to provide the desired value of R₀. (In terms of the "true values" of our statistical model, we have γ₀ = 1 and β₀ = R₀.) We took a population size of 10,000, of which 100 people were initially infectious, with the remainder being susceptible. (Altering the initial number of infectives makes no qualitative difference to the results that follow.) The model was solved for S and I using the MATLAB ode45 routine, starting from t = 0, giving output at n + 1 evenly spaced time points (0, t₁, …, tₙ). The duration of the outbreak depends on R₀ and so, in order to properly capture the time scale of the epidemic, we choose tₙ to be the time at which I(t) falls back to its initial value. A data set for prevalence was then obtained by adding noise, generated by multiplying independent draws, eᵢ, from a normal distribution with mean zero and variance σ₀², by I(tᵢ; θ₀)^ξ. Our data,

Yᵢ = I(tᵢ; θ₀) + I(tᵢ; θ₀)^ξ eᵢ,

thus satisfies the assumptions made in Section 3 and allows us to apply the asymptotic statistical theory. Notice that, for convenience, we have chosen normally distributed eᵢ, but we re-emphasize that the asymptotic statistical theory does not require this. Data sets depicting incidence of infection can be created in a similar way, replacing I(tᵢ) by S(tᵢ₋₁) − S(tᵢ), as discussed above, for i = 1, …, n.
Three different values of ξ, namely ξ = 0 (absolute noise), ξ = 1/2 (Poisson noise) and ξ = 1 (relative noise), were used to generate synthetic data sets. Given that prevalence (or incidence) increases with R₀, the use of absolute noise, with the same value of σ₀² across the three transmissibility scenarios, leads to noise being much more noticeable in the low transmissibility situation. This complicates comparisons of the success of the estimation process between differing R₀ values. Visual inspection of real-world data sets, however, indicates that variability increases with either prevalence or incidence [23]. If this variability reflected reporting errors, with individual cases being reported independently with some fixed probability, the variance of the resulting binomial random variable would be proportional to its mean value. As a result, we direct most of our attention to data generated using ξ = 1/2.
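As an illustration, the noise-corruption step could be written as follows (a minimal sketch, assuming the true prevalence values have already been computed; the generator seed and function name are ours).

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(I_true, sigma0, xi):
    """Corrupt true prevalence I(t_i; theta0) with noise I^xi * e_i, where
    e_i ~ N(0, sigma0^2): xi = 0 absolute, 1/2 Poisson, 1 relative noise."""
    e = rng.normal(0.0, sigma0, size=I_true.shape)
    return I_true + I_true**xi * e

# e.g., the Poisson noise structure used for most of our experiments:
# Y = add_noise(prevalence, sigma0=1.0, xi=0.5)
```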
Because we know the true values of the parameters and the variance of the noise, we can calculate the variance-covariance matrix Σ₀ (Equation (8)) exactly, without having to use estimated parameter values or error variance. This provides a more reliable value than that obtained using the estimate Σ̂, allowing us to more easily detect small changes in standard errors, such as those that occur when a single data point is removed from or added to a data set, as we do in Section 6. This approach was employed to obtain many of the results that follow (in each instance, it will be stated whether Σ₀ or Σ̂ was used to provide uncertainty estimates).

5. Results: Parameter estimation. We could attempt to fit any combination of the parameters and initial conditions of the SIR model, i.e., β, γ, N, S₀ and I₀. We shall concentrate, however, on the simpler situation in which we just fit β and γ, imagining that the other values are known. This might be the case if a new pathogen were introduced into a population at a known time, so that the population was known to be entirely susceptible apart from the initial infectives. Importantly, the estimation of β and γ allows us to estimate the value of R₀. We shall return to consider estimation of three or more parameters in a later section.
The least squares estimation procedure works well for synthetic data sets generated using the three different values of R₀ (results not shown). Diagnostic plots of the residuals were used to examine potential departures from the assumptions of the statistical model: unsurprisingly, none were seen when the value of ξ used in the fitting process matched that used to generate the data, and clear deviations were seen when an incorrect value of ξ was used in the fitting process (results not shown).
Table 1. Coefficients of variation (CV) for parameter estimates of β, γ, R₀, and the correlation coefficient between β and γ, ρ_{β,γ}. The coefficients of variation and correlation coefficient were obtained from the asymptotic statistical theory, where the variance-covariance matrix Σ₀ was calculated exactly (i.e., no curve-fitting was carried out). Calculations were done under a Poisson noise structure, ξ = 1/2, with σ₀² = 1, and n = 50 data points. Parameter values and initial conditions used were β = R₀, γ = 1, N = 10,000, S₀ = 9900, and I₀ = 100.

A Monte Carlo approach can be used to verify the distributional results of the asymptotic statistical theory. A set of point estimates of (β, γ) was generated by applying the estimation process to a large number of replicate data sets generated using different realizations of the noise process, allowing estimates of variances and covariances of parameter estimates to be obtained directly. Unsurprisingly, good agreement was seen when the correct value of ξ was employed in the estimation process, and the distribution of (β, γ) estimates appears to be consistent with the appropriate bivariate normal distribution predicted by the theory.

Table 1 and Figure 1a demonstrate that estimates of β and γ are correlated, with the sign and magnitude of the correlation coefficient depending strongly on the value of R₀. Standard errors for the estimates also depend strongly on the value of R₀ (Figure 1b). As R₀ approaches 1, the correlation coefficient approaches 1 and the standard errors become extremely large. It is, therefore, difficult to obtain good estimates of the individual parameters in this case. Examination of the cost functional J in the (γ, β) plane reveals the origin of the strong correlation and large standard errors (Figure 2a). Near its minimum value, the contours of J are well approximated by long thin ellipses whose major axes are oriented along the line β = R₀γ. Thus there is a considerable range of β and γ values that give almost identical model fits, but for which the ratio β/γ varies relatively little. In a later section we shall see that these long thin elliptical contours arise as a consequence of the sensitivities of the model to changes in β and γ being almost equal in magnitude but of opposite sign. (The derivation of these contour curves can be found in [9].) For values of R₀ that lead to lower correlation between estimates of β and γ, the contours of J near its minimum point are closer to being circular and are less tilted (Figure 2b), allowing for easier identification of the two individual parameters. The standard error for the estimate of γ is seen to decrease with R₀, while that of β exhibits non-monotonic behavior. For a fixed value of γ, increasing R₀ leads to more rapid spread of the infection and hence an earlier and higher peak in prevalence (Figure 3). For large values of R₀, the majority of the transmission events occur over the timespan of the first few data points, meaning that fewer points within the data set are informative regarding the spread of the infection. Consequently, it becomes increasingly difficult to estimate β as R₀ is increased beyond some critical value. As seen in Table 1, estimates of β and γ have relatively large uncertainties when R₀ is small. It would, for instance, be difficult to accurately estimate the average duration of infection, 1/γ, for an infection such as seasonal influenza, which is typically found to have R₀ of about 1.3 (ranging from 0.9 to 2.1) [14], using the least squares approach. Importantly, however, the estimate of R₀ has a much lower variation (as measured by the CV) than the estimates of β and γ. The strong positive correlation between the estimates of β and γ reduces the variance of the R₀ estimate, as can be seen in Equation (18), reflecting the earlier observation concerning the orientation of the contours of the cost functional along lines of the form β = R₀γ.

6. Results: Sampling schemes and uncertainty of estimates. Biological data is often difficult or costly to collect, so it is desirable to collect data in such a way as to maximize its informativeness. Consequently, it is important to understand how parameter estimation depends on the number of sampled data points and the times at which the data are collected. This information can then be used to guide future data collection. In this section we examine two approaches to address this question: sensitivity analysis and data sampling.

6.1. Sensitivity. The sensitivities of a system provide temporal information on how the states of the system respond to changes in the parameters [21,37]. They can, therefore, be used to identify time intervals where the system is most sensitive to such changes. Noting that the sensitivities are used to calculate the standard errors of parameter estimates, direct observation of the sensitivity functions provides an indication of time intervals in which data points carry more or less information for the estimation process [4,5]. For instance, if the sensitivity to some parameter is close to zero in some time interval, changes in the value of the parameter would have little impact on the state variable over that interval. Conversely, more accurate knowledge of the state variable at that time could not cause the estimated parameter value to change by much.
For low values of R₀, for example R₀ = 1.2, we see that the sensitivity functions of I(t) with respect to β and γ are near mirror images of each other (Figure 3a). This mirror-image phenomenon allows a change in one parameter to be easily compensated by a corresponding change in the other parameter, giving rise to the strong correlation between the estimates of the two parameters. Early in the epidemic, we see a similar phenomenon for all values of R₀. We comment further on this observation in the next section.
As R₀ increases, the two sensitivity functions take on quite different shapes. Prevalence is much less sensitive to changes in β than to changes in γ. The sensitivity of prevalence to β is greatest just before the epidemic peak, becoming negative, but small in magnitude, during the late stages of the outbreak. The sensitivity becomes negative because an increase in β would cause the peak of the outbreak to occur earlier, reducing the prevalence at a fixed, later time. I(t) remains sensitive to γ throughout much of the epidemic, reaching its largest absolute value slightly later than the time at which the outbreak peaks.
While the sensitivity functions provide an indication of when additional, or more accurate, data is likely to be informative, they have clear limitations, not least because they do not provide a quantitative measure of how uncertainty estimates, such as standard errors, are impacted. Being a univariate approach, they cannot account for any impact of correlation between parameter estimates, as we shall see below, although they can indicate instances in which parameter estimates are likely to be correlated. Furthermore, they do not account for the different weighting accorded to different data points on account of the error structure of the model, such as the relationship between error variance and the magnitude of the observation being made. Another type of sensitivity function, the generalized sensitivity function (GSF) introduced by Thomaseth and Cobelli [40], which is based on the Fisher information matrix, does account for these two factors. While the GSF does provide qualitative information that can guide data collection, its interpretation is not without its own complications [4] and, given that we found that it provided little additional insight in the current setting, we shall not discuss it further here.

6.2. Data sampling. In order to gain quantitative information about the impact of sampling schemes on parameter estimation, as opposed to the qualitative information provided by inspection of the sensitivity functions, we carried out three numerical experiments in which different sampling schemes were implemented. The first approach involves altering the frequency at which data are sampled within a fixed observation window that covers the duration of the outbreak. The second approach considers sampling at a fixed frequency but over observation windows of differing durations. The third approach examines increasing the sampling frequency within specified sub-intervals of a fixed observation window.
In the first sampling method we alter the frequency at which observations are taken while keeping the observation window fixed. In other words, we increase n while fixing t₀ = 0 and tₙ = t_end. For incidence data, increasing the observation frequency, i.e., reducing the period over which each observation is made, has the important effect of reducing the values of the observed data and the corresponding model values. Under relative observational error (ξ = 1) there is a corresponding change in the error variance, keeping a constant signal-to-noise ratio. If ξ < 1, increasing n decreases the signal-to-noise ratio of the data.
Adding additional data points in this way increases the accuracy of parameter estimates, with standard errors eventually decreasing as n^(−1/2) (Figure 4, in which prevalence data is used), in accordance with the asymptotic theory [39]. This is still the case for incidence data, even when ξ < 1 and the signal-to-noise ratio decreases with n. We point out that changing the sampling frequency will typically not be an option in epidemiological settings because data will be collected at some fixed frequency, such as once each day or week, although, conceivably, a weekly sampling frequency could be replaced by daily sampling.
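A sketch of this first experiment follows (ours; it assumes a helper that returns the true model values and sensitivities at given observation times, such as the augmented solver sketched in Section 3). It recomputes Σ₀ from Equation (8) as n grows over a fixed window; plotting the returned standard errors against n on log-log axes reproduces the slope of −1/2.

```python
import numpy as np

def se_versus_n(solve_sensitivities, sigma0, xi, t_end, n_values):
    """Exact standard errors from Equation (8) as sampling frequency grows.
    solve_sensitivities(t_obs) -> (I, chi) at the true parameters."""
    out = []
    for n in n_values:
        t_obs = np.linspace(0.0, t_end, n + 1)[1:]   # n points, fixed window
        I, chi = solve_sensitivities(t_obs)
        w = 1.0 / I**(2 * xi)                        # weights at the true parameters
        Sigma0 = sigma0**2 * np.linalg.inv(chi.T @ (w[:, None] * chi))
        out.append(np.sqrt(np.diag(Sigma0)))         # SEs of beta and gamma
    return np.array(out)                             # slope -1/2 on a log-log plot
```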
For real-time outbreak analysis, the amount of available data will increase over time as the epidemic unfolds. Consequently, it is of practical importance to understand how much data, and hence observation time, is required to obtain reliable estimates, and the extent to which estimates will improve with additional data points. Using Equation (8) and the known values of the parameters, we calculated standard errors for parameter estimates based on the first n_used data points, where p + 1 ≤ n_used ≤ n. As seen in Figures 5a and 5b, when only one parameter is fitted, the standard error decreases rapidly at first, but its decrease slows significantly just before the peak of the epidemic. Once this point in time has been reached, subsequent data points provide considerably less additional information than did earlier data points. In this setting, the most important time interval extends from the initial infection to just before the peak of the outbreak. However, when both β and γ are fitted, the interval of steep descent extends slightly beyond the peak of the epidemic, as seen in Figure 6a. This indicates that it would be useful to collect data over a longer interval in this case. Notice the log scale on the vertical axis of each of the aforementioned plots. These figures suggest that the amount of information contained in the earliest portion of an outbreak is orders of magnitude higher than that contained in later portions.
Figure 6b shows the correlation coefficient between estimates of β and γ as the epidemic progresses. It can be seen that estimates of β and γ are highly correlated until the first inflection point of the epidemic curve, causing the significantly higher standard errors seen in Figure 6a. This behavior is not unexpected, given that the two sensitivity curves for prevalence are near mirror images early in the outbreak, during the exponential growth phase.
Our final sampling method investigated the impact of removing a single data point as a means of identifying the data points which provide the most information for the estimation of the parameters. A baseline data set consisting of fifty evenly spaced points taken over the course of the outbreak was generated using absolute noise (ξ = 0). Fifty reduced data sets were created by removing, in turn, a single data point from the baseline data set. Standard errors were then computed for the reduced data sets using the true covariance matrix Σ₀ (Equation (8)). (For this experiment, use of the true covariance matrix allowed us to accurately observe the small effects on standard errors that resulted from the removal of single data points. Errors introduced by solving the inverse problem would have obscured the patterns we observed.) The largest standard error values in this group of data sets correspond to the most informative data points, since the removal of such points leads to the largest increase in the uncertainty of the estimate. As Figure 7 shows, when β is the only parameter fitted and ordinary least squares estimation is used, the local maxima of the standard error curve occur at the same times as the local extrema of the sensitivity curve, and the local minima occur when the sensitivity is close to zero. In this case, the sensitivity function correctly identifies the subintervals in which data are most or least informative about β.
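The single point removal computation is straightforward given the sensitivity matrix; a sketch (ours) follows, in which dropping observation i amounts to deleting the corresponding row before forming the covariance matrix of Equation (8).

```python
import numpy as np

def removal_standard_errors(chi, w, sigma2):
    """Standard errors of the fitted parameters when data point i is removed,
    computed from the exact covariance formula (Equation (8))."""
    n = chi.shape[0]
    ses = []
    for i in range(n):
        keep = np.arange(n) != i                 # drop the i-th observation
        chi_i, w_i = chi[keep], w[keep]
        Sigma = sigma2 * np.linalg.inv(chi_i.T @ (w_i[:, None] * chi_i))
        ses.append(np.sqrt(np.diag(Sigma)))
    return np.array(ses)                         # row i: SEs with point i removed
```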
The picture is not quite as straightforward when β and γ are estimated simultaneously using ordinary least squares. Figure 8 shows that the local maxima of the standard error curves no longer line up directly with the local extrema of the sensitivity curves (this effect is more easily seen in Figure 8b). This is likely due
to the correlation between the estimates of β and γ: the off-diagonal terms of χ(θ)ᵀ W(θ) χ(θ) involve products of sensitivities with respect to the two different parameters. As a consequence, it is no longer sufficient to examine individual sensitivity curves but, as we have seen, the selective reduction method described here, based on the asymptotic theory, can identify when additional data should ideally be sampled.
Similarly, having a weight matrix other than the identity (i.e., when GLS, rather than OLS, is to be used) leads to the sensitivity curves misidentifying the subintervals in which data are most or least informative for parameter estimation (results not shown; see [9]). This occurs whether single or multiple parameters are estimated, and happens because the sensitivity curves do not, by themselves, account for the relative importance placed on different data points. Again, the selective reduction method accounts for this effect and correctly identifies time intervals when additional data would be most informative.

7. Results: Parameter identifiability. Until now, we have only considered the introduction of an infection into a virgin population, assuming a known initial number of infectives in an otherwise susceptible population. For an endemic infection, such as seasonal flu, only a fraction of the population would be susceptible at the start of an outbreak. In such instances, the general reproductive number, R_t, the average number of secondary infections at any point in time, is a more relevant quantity than R₀. For the SIR model, R_t is given by

R_t = (β/γ) S(t)/N.

In the virgin population considered above, we saw that as R₀ approached one there was considerable difficulty in independently estimating a pair of parameters. In the endemic setting, this phenomenon occurs as R_t approaches one, so the parameter identifiability issue can arise even if R₀ is significantly greater than one.
In the endemic setting, we would be unlikely to know the initial numbers of infectives and susceptibles, so we would also need to estimate the values of S₀ and I₀. Given the difficulty in estimating a pair of parameters that has already been illustrated, it seems reasonable to expect that parameter identifiability would become a more delicate issue if larger sets of parameters were estimated. In this section we shall explore the identifiability of parameters when combinations of β, γ, S₀ and I₀ are estimated. This approach is generally referred to as subset selection and has been explored in the context of identifiability by a number of authors (see, for example, [7,8,15,28]).
It has been shown by Evans et al. [22] that the SIR model with demography is identifiable for all model parameters and initial conditions. They use a strict definition of non-identifiability, under which a model is non-identifiable if a change in one parameter can be exactly compensated by changes in the other parameters. However, the authors also concede that while the model may be identifiable, that property alone does not give insight into the ease of estimation of certain subsets of parameters. For example, by their definition, two parameters whose estimates have a correlation coefficient of 0.99 would be identifiable, yet they may not be easily estimated. In this section, we use quantitative methods to assess the ease of parameter identifiability in the context of subset selection.
It was stated above that the asymptotic statistical theory requires the limiting matrix Ω₀ to be invertible. With a finite-sized sample, we instead require this of Ω₀⁽ⁿ⁾. Non-identifiability leads to these matrices being singular, or close to singular [8], and so one method for determining whether model parameters are identifiable involves calculating the condition number of Ω₀⁽ⁿ⁾ or, equivalently, the condition number of the matrix Σ⁽ⁿ⁾ [15]. The condition number, κ(X), of a nonsingular matrix X is defined to be the product of the norm of X and the norm of X⁻¹. If we take the norm to be the usual induced matrix 2-norm, the condition number of X is the ratio of the largest singular value (from a singular value decomposition) of X to the smallest singular value of X [34].
Initially, we investigate the case where only β and γ are fitted. In this situation, we are able to find an expression for κ(Σ): writing a = Var(β̂), c = Var(γ̂) and b = Cov(β̂, γ̂) = ρ_{β,γ}(ac)^(1/2), we have

κ(Σ) = [a + c + ((a − c)² + 4b²)^(1/2)] / [a + c − ((a − c)² + 4b²)^(1/2)].   (22)

If the standard errors were fixed, Equation (22) shows that as the correlation between estimates of β and γ approaches one, the condition number goes to infinity. However, in reality the standard errors themselves depend on the values of β and γ; Figure 9 provides a more complete picture of how the condition number changes over a range of R₀ values. As the figure shows, it is more difficult to rely on estimates of β and γ when R₀ approaches one, corroborating what we have previously seen for the correlation coefficient (see Figure 1a). Numerical experiments indicate that when more parameters are fitted to the data, identifiability becomes a more serious issue. In such cases, while we can no longer give a simple expression for κ(Σ₀), since it is a function of the parameters, the initial conditions and even the data, it still provides insight into parameter identifiability. We examine κ(Σ₀) across different subsets of fitted parameters, as seen in Table 2. As we increase the number of parameters fitted, the condition number can increase by multiple orders of magnitude. This is evident whenever we fit both β and S₀. Notice that for the larger κ values, the magnitude of ρ is very near to one, indicating strong correlation. Thus, we can surmise that as we increase the number of fitted parameters, our ability to identify individual parameters decreases, especially if the parameters added to θ have correlated estimates.
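In code, the subset-selection diagnostic amounts to selecting columns of the full sensitivity matrix; the sketch below (ours, with hypothetical subset labels) reports κ(Σ) for each candidate set of fitted quantities.

```python
import numpy as np

def subset_condition_numbers(chi_full, w, sigma2, names, subsets):
    """Condition number kappa(Sigma) for each fitted-parameter subset.
    chi_full: n x 4 sensitivity matrix w.r.t. (beta, gamma, S0, I0)."""
    idx = {name: j for j, name in enumerate(names)}
    result = {}
    for subset in subsets:
        cols = [idx[name] for name in subset]
        chi = chi_full[:, cols]
        Sigma = sigma2 * np.linalg.inv(chi.T @ (w[:, None] * chi))
        result[subset] = np.linalg.cond(Sigma)   # ratio of extreme singular values
    return result

# e.g. subset_condition_numbers(chi_full, w, sigma2,
#        names=("beta", "gamma", "S0", "I0"),
#        subsets=[("beta", "gamma"), ("beta", "gamma", "I0"),
#                 ("beta", "gamma", "S0", "I0")])
```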
In this example, if we assume the initial conditions are known, our ability to estimate β and γ is good. Yet, once we have to estimate one or both initial conditions, our ability to estimate either β or γ worsens considerably. Given that in most situations initial conditions are not known exactly, parameter identifiability has the potential to be of widespread concern.

8. Discussion. We have demonstrated the importance of providing measures of uncertainty to accompany estimates of parameter values. The asymptotic statistical theory employed here provides a reasonably straightforward way to obtain such information when least-squares fitting is used as the estimation process. The use of a number of synthetic data sets, generated under a number of different scenarios concerning the transmissibility of infection, has allowed us to get a broader understanding of the parameter estimation process than would have been possible if we had limited attention to a single data set. As we have demonstrated, the uncertainties that accompany parameter estimation, and even our ability to separately identify parameters, even with this simplest of SIR models, can vary greatly depending on the underlying parameter values and the parameter set being fitted. A primary reason for difficulties in estimation and identifiability stems from correlations between parameter estimates. Even if individual parameter estimates have large uncertainties, it can still be possible to estimate epidemiologically important quantities, e.g., the basic reproductive number R₀, with much less uncertainty.
Increasing the number of observations made at critical times during the epidemic can provide a substantial gain in the precision of the estimation process. While the sensitivity equations of the model provide a general idea of the times at which additional data will be most informative, they do not tell the whole story. The asymptotic statistical theory, together with the data point removal technique, can be used to guide data collection. This approach can be employed once a parameter set is known: this might be one based on a preliminary set of estimates, expert opinion, or even a best guess. Some aspects of our discussion do, however, require more detailed information on the magnitude and nature of the noise in the data.
We have focused on identifiability in the least squares context, but one cannot escape a lack of parameter identifiability simply by using a different method of parameter estimation. Bayesian methods, including Markov chain Monte Carlo (see, for example, [11] and [31]), provide an alternative suite of approaches that are commonly used to solve the inverse problem. Yet, since identifiability is primarily a feature of the mathematical model and less dependent on the fitting process, switching estimation techniques often does not remove the problem of parameter identifiability, and so it remains an important concern however the inverse problem is solved.
It should be noted that all experiments presented here were conducted with knowledge of the underlying model; that is, the correct model was fit to the data. With real data this assumption is not valid, and model misspecification introduces a further layer of uncertainty. This type of structural uncertainty has received far less attention, but in some circumstances it can dwarf the uncertainty due to noise in the data. As an example, a number of authors have shown that estimates of the basic reproductive number obtained by fitting models to data on the initial growth of an outbreak can be highly sensitive to model assumptions [32,33,35,42].
We chose to focus our attention on perhaps the simplest possible setting for the estimation process, one for which the SIR model was appropriate. Unfortunately, few real-world disease transmission processes are quite this simple; in most instances, a more complex epidemiological model, accompanied by a larger set of parameters and initial conditions, would be more realistic. It is not hard to imagine that many of the issues discussed here would be much more delicate in such situations: parameter identifiability, in particular, could be a major concern. The approach employed here would reveal whether such problems would accompany estimation using a given model, and can indeed be used to guide the selection of models and/or parameter sets that can be estimated reliably. Again, this emphasizes the need for the estimation process to be accompanied by some account of the uncertainties, not only in terms of the uncertainties of individual estimates but also the correlations between estimates.

Figure 1. Dependence of the correlation coefficient and standard errors for estimates of β and γ on the value of R₀. Panel (a) displays the correlation coefficient, ρ, between estimates of β and γ for a range of R₀ values. Panel (b) shows, on a log scale, standard errors for estimates of β (solid curve) and γ (dashed curve). The variance-covariance matrix Σ₀ was calculated exactly (i.e., no curve-fitting was carried out) under a Poisson noise structure, ξ = 1/2, with σ₀² = 1, and n = 250 data points. Parameter values and initial conditions used were β = R₀, γ = 1, N = 10,000, S₀ = 9900, and I₀ = 100.

Figure 3. Sensitivities of I(t) (i.e., prevalence) with respect to the model parameters β (solid curves) and γ (dashed curves) are shown in the upper panels of the graphs for (a) R₀ = 1.2, (b) R₀ = 3 and (c) R₀ = 10. The lower panel of each graph displays the corresponding prevalence-time curve. The initial conditions of the SIR model were S₀ = 9900, I₀ = 100, with N = 10,000, and γ was taken equal to one, so β = R₀.

Figure 4. Standard errors of β (solid curve) and γ (dashed curve) as the number of observations, n, changes while maintaining a constant window of observation (fixed t_end). Apart from the smallest few values of n, the points fall on a line of slope −1/2 on this log-log plot. Standard errors were calculated using Equation (8), using the true values of the parameters. The variance-covariance matrix Σ₀ was calculated exactly (i.e., no curve-fitting was carried out) for the disease prevalence under a Poisson noise structure, ξ = 1/2, with σ₀² = 1. Parameter values and initial conditions used were β = 3, γ = 1, N = 10,000, S₀ = 9900, and I₀ = 100.

Figure 5. Impact of increasing the length of the observation window on standard errors of estimates of (a) β and (b) γ when each is estimated separately from prevalence data. The observation window is [0, t_{n_used}], i.e., estimation was carried out using n_used data points. Because data points are equally spaced, the horizontal axis depicts both the number of data points used and the time since the start of the outbreak. For reference, the prevalence curve, I(t), is shown in the lower panel of each graph. Standard errors are plotted on a logarithmic scale. The exact formula for Σ₀ was used, with σ₀² = 1, S₀ = 9900, I₀ = 100, N = 10,000, β = 3 and γ = 1. The Poisson noise structure, ξ = 1/2, was employed.

Figure 6. Graph (a) illustrates the impact of increasing the length of the observation window on standard errors of estimates of β (solid curve) and γ (dashed curve) when both are estimated simultaneously. Graph (b) displays the effect on the correlation coefficient between estimates of β and γ. The observation window consists of n_used data points in the time interval [t₀, t_{n_used}]. For reference, the prevalence curve, I(t), is shown in the lower panels. All parameter values and other details are as in the previous figure.

Figure 7. Standard errors for the estimation of β from prevalence data using the single point removal method discussed in the text (solid curve), with the baseline standard error (without removing any points) also plotted (horizontal dashed line). Standard errors were calculated using Equation (8) and each is plotted at the time tᵢ corresponding to the removed data point. For comparison, the sensitivity of I(t) with respect to β is also shown (dotted curve). Synthetic data was generated using the parameter values σ₀² = 10⁴, S₀ = 9900, I₀ = 100, N = 10,000, β = 3 and γ = 1. The additive noise structure, ξ = 0, was assumed.

Figure 8. Standard errors for the simultaneous estimation of β and γ from prevalence data using the single point removal method discussed in the text (solid curves). Standard errors were calculated using Equation (8), and each is plotted at the time tᵢ of the removed data point. Panel (a) shows the standard error for the estimate of β (solid curve), together with the baseline (i.e., without removing any points) standard error (horizontal dashed line) and the sensitivity of I(t) with respect to β (dotted curve). Panel (b) shows the standard error for the estimate of γ (solid curve), together with the baseline standard error (horizontal dashed line) and the sensitivity of I(t) with respect to γ (dotted curve). All parameter values and other details are as in the previous figure.