The Estimation of the Effective Reproductive Number from Disease Outbreak Data

We consider a single-outbreak susceptible-infected-recovered (SIR) model and corresponding estimation procedures for the effective reproductive number $\mathcal{R}(t)$. We discuss the estimation of the underlying SIR parameters with a generalized least squares (GLS) estimation technique. We do this in the context of appropriate statistical models for the measurement process. We use asymptotic statistical theories to derive the mean and variance of the limiting (Gaussian) sampling distribution and to perform post-fit statistical analyses of the inverse problems. We illustrate the ideas and pitfalls (e.g., large condition numbers of the corresponding Fisher information matrix) with both synthetic and influenza incidence data sets.


Introduction
The transmissibility of an infection can be quantified by its basic reproductive number, R_0, defined as the mean number of secondary infections seeded by a typical infective into a completely susceptible (naive) host population [1,14,20]. For many simple epidemic processes, this parameter determines a threshold: whenever R_0 > 1, a typical infective gives rise to more than one secondary infection, leading to an epidemic. In contrast, when R_0 < 1, infectives typically give rise to fewer than one secondary infection and the prevalence of infection cannot increase.
Due to the natural history of some infections, transmissibility is better quantified by the effective, rather than the basic, reproductive number. For instance, exposure to influenza in previous years confers some cross-immunity [11,17,25]; the strength of this protection depends on the antigenic similarity between the current year's strain of influenza and earlier ones.
Consequently, the population is non-naive and so it is more appropriate to consider the effective reproductive number, R(t), a time-dependent quantity that accounts for the population's reduced susceptibility.
Our goal is to develop a methodology for the estimation of R(t) that also provides a measure of the uncertainty in the estimates. We apply the proposed methodology in the context of annual influenza outbreaks, focusing on data for influenza A (H3N2) viruses, which were, with the exception of the influenza seasons 2000-1 and 2002-3, the dominant flu subtype in the US over the period from 1997 to 2005 [9,29].
The estimation of reproductive numbers is typically an indirect process because some of the parameters on which these numbers depend are difficult, if not impossible, to quantify directly.
A commonly used indirect approach involves fitting a model to some epidemiological data, providing estimates of the required parameters.
In this study we estimate the effective reproductive number by fitting a deterministic epidemiological model to the data, employing either an Ordinary Least-Squares (OLS) or a Generalized Least-Squares (GLS) estimation scheme to obtain estimates of model parameters. Statistical asymptotic theory [13,27] and sensitivity analysis [12,26] are then applied to give approximate sampling distributions for the estimated parameters. Uncertainty in the estimates of R(t) is then quantified by drawing parameters from these sampling distributions, simulating the corresponding deterministic model, and calculating the resulting effective reproductive numbers. In this way, the sampling distribution of the effective reproductive number is constructed at any desired time point.
The statistical methodology provides a framework within which the adequacy of the parameter estimates can be formally assessed for a given data set. We shall present instances in which the fitted model appears to provide an adequate fit to a given data set but where the statistics reveal that the parameter estimates have very high levels of uncertainty. We also discuss situations in which the fitted model appears, at least visually, to provide an adequate fit and where the statistics suggest that the uncertainty in the parameters is not so large but that, in reality, a poor fit has been achieved. We discuss the use of residuals plots as a diagnostic for this outcome, which highlights the problems that arise when the assumptions of the statistical model underlying the estimation framework are violated.
This manuscript is organized as follows: In Section 2 the data sets are introduced. A single-outbreak deterministic model is introduced in Section 3. Section 4 introduces the least squares estimation methodology used to estimate values for the parameters and quantify the uncertainty in these estimates. Our methodology for obtaining estimates of R(t) and its uncertainty is also described. Use of these schemes is illustrated in Section 5, in which they are applied to synthetic data sets. Section 6 applies the estimation machinery to the influenza incidence data sets. We conclude with a discussion of the methodologies and their application to the data sets.

Longitudinal Incidence Data
Influenza is one of the most significant infectious diseases of humans, as witnessed by the 1918 "Spanish Flu" pandemic, during which 20 to 40 percent of the worldwide population became infected. At least 50 million deaths resulted, with 675,000 of these occurring in the US [30]. The impact of flu is still significant during inter-pandemic periods: the Centers for Disease Control and Prevention (CDC) estimate that between 5 and 20 percent of the US population becomes infected annually [9]. These annual flu outbreaks lead to an average of 200,000 hospitalizations (mostly involving young children and the elderly) and mortality that ranges between about 900 and 13,000 deaths per year [29].
The Influenza Division of the CDC reports weekly information on influenza activity in the US from calendar week 40 in October through week 20 in May [9], the period referred to as the influenza season. Because the influenza virus exhibits a high degree of genetic variability, data is not only collected on the number of cases but also on the types of influenza viruses that are circulating. A sample of viruses isolated from patients undergoes antigenic characterization, with the type, subtype and, in some instances, the strain of the virus being reported [9].
The CDC acknowledges that, while these reports may help in mapping influenza activity (whether or not it is increasing or decreasing) throughout the US, they do not provide enough information to calculate how many people became ill with influenza during a given season. The CDC's caution likely reflects the uncertainty associated with the sampling process that gives rise to the tested isolates, in particular that this process is not sufficiently standardized across space and time. We return to discuss this point later in this paper.
Despite the cautionary remarks by the CDC we use these isolate reports as illustrative data sets to which we can apply our proposed estimation methodologies. Interpretation of the results, however, should be mindful of the issues associated with the data. The total number of tested specimens and isolates through various seasons are summarized in Table 1. It is observed that H3N2 viruses predominated in most seasons with the exception of 2000-1 and 2002-3.
Consequently, we focus our attention on the H3N2 subtype. Figure 1 depicts the number of H3N2 isolates reported over the 1999-2000 influenza season.

Deterministic Single-Outbreak SIR Model
The model that we use is the standard Susceptible-Infective-Recovered (SIR) model (see, for example, [1,6]). The state variables S(t), I(t), and R(t) denote the number of people who are susceptible, infective, and recovered, respectively, at time t. It is assumed that newly infected individuals immediately become infectious and that recovered individuals acquire permanent immunity. The influenza season, lasting nearly 32 weeks [9], is short compared to the average lifespan, so we ignore demographic processes (births and deaths) as well as disease-induced fatalities and assume that the total population size remains constant. The model is given by the following set of nonlinear differential equations:

dS/dt = −β S I / N, (1)
dI/dt = β S I / N − γ I, (2)
dR/dt = γ I. (3)

Here, β is the transmission parameter and γ is the (per-capita) rate of recovery, the reciprocal of which gives the average duration of infection. Notice that one of the differential equations is redundant because the three compartments sum to the constant population size: S(t) + I(t) + R(t) = N. We choose to track S(t) and I(t). The initial conditions of these state variables are denoted by S(t_0) = S_0 and I(t_0) = I_0.
The equation for the infective population, (2), can be rewritten as

dI/dt = γ (R(t) − 1) I, (4)

where R(t) = (S(t)/N) R_0 and R_0 = β/γ. R(t) is known as the effective reproductive number, while R_0 is known as the basic reproductive number. We have that R(t) ≤ R_0, with the upper bound, the basic reproductive number, only being achieved when the entire population is susceptible.
We note that R(t) is the product of the per-infective rate at which new infections arise and the average duration of infection, and so the effective reproductive number gives the average number of secondary infections caused by a single infective, at a given susceptible fraction. The prevalence of infection increases or decreases according to whether R(t) is greater than or less than one, respectively. Because there is no replenishment of the susceptible pool in this SIR model, R(t) decreases over the course of an outbreak as susceptible individuals become infected.
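As a concrete illustration of these dynamics, the model and the resulting R(t) curve can be simulated numerically. The sketch below (Python with SciPy) uses made-up parameter values chosen only for illustration; they are not estimates from any data set considered here:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, x, beta, gamma, N):
    """Right-hand side of the SIR model, tracking S and I only
    (R follows from R = N - S - I)."""
    S, I = x
    new_infections = beta * S * I / N
    return [-new_infections, new_infections - gamma * I]

# Hypothetical parameter values (time unit: weeks).
N = 1.0e6
beta, gamma = 2.0, 1.0            # so R0 = beta/gamma = 2
S0, I0 = 0.99 * N, 100.0

t = np.linspace(0.0, 30.0, 301)
sol = solve_ivp(sir_rhs, (t[0], t[-1]), [S0, I0],
                args=(beta, gamma, N), t_eval=t, rtol=1e-8)
S, I = sol.y

# Effective reproductive number: R(t) = (S(t)/N) * R0.
R_eff = (S / N) * (beta / gamma)
```

As expected, R_eff starts near R_0, decreases monotonically as susceptibles are depleted, and passes through one at the prevalence peak.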

Estimation Schemes
In order to calculate R(t), it is necessary to know the two epidemiological parameters β and γ, as well as the number of susceptibles, S(t), and the population size, N. As mentioned before, difficulties in the direct estimation of β (whose value reflects both the rate at which contacts occur in the population and the probability of transmission when a susceptible and an infective meet) and of S(t) preclude direct estimation of R(t). As a result, we adopt an indirect approach, which proceeds by first finding the parameter set for which the model has the best agreement with the data and then calculating R(t) by using these parameters and the model-predicted time course of S(t). Simulation of the model also requires knowledge of the initial values, S_0 and I_0, which must also be estimated.
Although the model is framed in terms of the prevalence of infection, I(t), the time-series data provide information on the weekly incidence of infection, which, in terms of the model, is given by the integral of the rate at which new infections arise over the week: ∫_{t_{j−1}}^{t_j} β S(t) I(t)/N dt.
We notice that the parameters β and N only appear (both in the model and in the expression for incidence) as the ratio β/N, precluding their separate estimation. Consequently we need only estimate the value of this ratio, which we call β̃ = β/N.
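In code, this observable can be computed by augmenting the state with the cumulative number of infections C(t) (with dC/dt = β̃SI) and differencing C at the weekly observation times. A minimal sketch, with a hypothetical "true" parameter vector θ = (S_0, I_0, β̃, γ):

```python
import numpy as np
from scipy.integrate import solve_ivp

def weekly_incidence(theta, t_obs):
    """Weekly incidence z(t_j; theta) for theta = (S0, I0, beta_tilde, gamma):
    the state is augmented with the cumulative number of infections C(t),
    and z(t_j) = C(t_j) - C(t_{j-1})."""
    S0, I0, bt, gamma = theta
    def rhs(t, x):
        S, I, C = x
        new_infections = bt * S * I          # beta_tilde = beta / N
        return [-new_infections, new_infections - gamma * I, new_infections]
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [S0, I0, 0.0],
                    t_eval=t_obs, rtol=1e-8)
    return np.diff(sol.y[2])

# Hypothetical "true" parameter (time unit: weeks).
theta0 = np.array([9.9e5, 100.0, 2.0e-6, 1.0])
t_obs = np.arange(0.0, 21.0)                  # weekly observation times t_0, ..., t_20
z = weekly_incidence(theta0, t_obs)
```

The cumulative total of the weekly incidences can never exceed the initial susceptible pool, which provides a quick sanity check on the implementation.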
We employ inverse problem methodology to obtain estimates of the vector θ = (S_0, I_0, β̃, γ) ∈ R^p = R^4 by minimizing the difference between the model predictions and the observed data, according to two related but distinct least squares criteria: ordinary least squares (OLS) and generalized least squares (GLS). In what follows, we refer to θ as the parameter vector, or simply the parameter, in the inverse problem, even though some of its components are initial conditions, rather than parameters, of the underlying dynamic model.

Ordinary Least Squares (OLS) Estimation
The least squares estimation methodology is based on the mathematical model as well as a statistical model for the observation process (referred to as the case counting process). It is assumed that our known model, together with a particular choice of parameters-the "true" parameter vector, written as θ_0-exactly describes the epidemic process, but that the n observations, Y_j, are affected by random deviations (e.g., measurement errors) from this underlying process. More precisely, it is assumed that

Y_j = z(t_j; θ_0) + ε_j, j = 1, . . . , n, (5)

where z(t_j; θ_0) denotes the weekly incidence given by the model under the true parameter, θ_0, and is defined by the following integral:

z(t_j; θ_0) = ∫_{t_{j−1}}^{t_j} β̃ S(t; θ_0) I(t; θ_0) dt. (6)

Here, t_0 denotes the time at which the epidemic observation process started and the weekly observation time points are written as t_1 < . . . < t_n.
The errors, ε_j, are assumed to be independent and identically distributed (i.i.d.) random variables with zero mean (E[ε_j] = 0), representing measurement error as well as other phenomena that cause the observations to deviate from the model predictions z(t_j; θ_0). The i.i.d. assumption means that the errors are uncorrelated across time and that each has identical variance, which we write as var(ε_j) = σ_0^2. It is assumed that σ_0^2 is finite. We make no further assumptions about the distribution of the errors: in particular, we do not assume that they are normally distributed. It is immediately clear that we have E[Y_j] = z(t_j; θ_0) and var(Y_j) = σ_0^2: in particular, this variance is longitudinally constant (i.e., across the time points).
For a given set of observations Y = (Y_1, . . . , Y_n), we define the estimator θ_OLS as follows:

θ_OLS = arg min_{θ ∈ Θ} Σ_{j=1}^{n} [Y_j − z(t_j; θ)]^2. (7)

Here Θ represents the feasible region for the parameter values. (We discuss this region in more detail later.) This estimator is a random variable (note that ε_j = Y_j − z(t_j; θ_0) is a random variable) obtained by minimizing the distance between the data and the model prediction. We note that all of the observations are treated as having equal importance in the OLS formulation.
If {y_j}_{j=1}^{n} is a realization of the case counting (random) process {Y_j}_{j=1}^{n}, we define the cost function by

J(θ) = Σ_{j=1}^{n} [y_j − z(t_j; θ)]^2, (8)

and observe that the minimizer

θ̂_OLS = arg min_{θ ∈ Θ} J(θ) (9)

provides a realization of the random variable θ_OLS.
The optimization problem in Equation (9) can, in principle, be solved by a wide variety of algorithms. The results discussed in this paper were obtained by using a direct search method, the Nelder-Mead simplex algorithm, as discussed by [22], employing the implementation provided by the MATLAB (The Mathworks, Inc.) routine fminsearch.
Because var(ε_j) = E(ε_j^2) = σ_0^2, the true variance satisfies

σ_0^2 = (1/n) E[ Σ_{j=1}^{n} (Y_j − z(t_j; θ_0))^2 ]. (10)

Because we do not know θ_0, we base our estimate of the error variance on an equation of this form, but instead of using θ_0 we use the estimated parameter vector, θ̂_OLS. The right side of Equation (10) is then equal to J(θ̂_OLS)/n. This estimate, however, is biased, and so instead the following bias-adjusted estimate is used:

σ̂_OLS^2 = (1/(n − 4)) J(θ̂_OLS) = (1/(n − 4)) Σ_{j=1}^{n} [y_j − z(t_j; θ̂_OLS)]^2. (11)

Here the n − 4 arises because p = 4 parameters have been estimated from the data.
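A minimal version of this OLS procedure, using SciPy's Nelder-Mead implementation (the analogue of the MATLAB routine fminsearch used here) on synthetic constant-variance data; the parameter values, noise level, and rescaling strategy are illustrative choices, not those of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def weekly_incidence(theta, t_obs):
    """Model-predicted weekly incidence z(t_j; theta), theta = (S0, I0, beta_tilde, gamma)."""
    S0, I0, bt, gamma = theta
    def rhs(t, x):
        new = bt * x[0] * x[1]
        return [-new, new - gamma * x[1], new]
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [S0, I0, 0.0],
                    t_eval=t_obs, rtol=1e-6)
    return np.diff(sol.y[2])

# Hypothetical "true" parameter and synthetic constant-variance observations.
theta_true = np.array([9.9e5, 100.0, 2.0e-6, 1.0])
t_obs = np.arange(0.0, 21.0)
rng = np.random.default_rng(0)
y = weekly_incidence(theta_true, t_obs) + 50.0 * rng.standard_normal(20)

# Work with scaled parameters u (theta = u * scale) so that Nelder-Mead
# searches over O(1) quantities despite the wildly different parameter scales.
scale = 1.1 * theta_true          # initial guess: 10% away from the truth

def J(u):
    """OLS cost J(theta); a large penalty keeps the search in the positive region."""
    theta = u * scale
    if np.any(theta <= 0.0):
        return 1e30
    return np.sum((y - weekly_incidence(theta, t_obs)) ** 2)

res = minimize(J, np.ones(4), method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-8, "fatol": 1e-2})
theta_hat = res.x * scale

# Bias-adjusted error-variance estimate: J(theta_hat) / (n - p).
n, p = y.size, 4
sigma2_hat = res.fun / (n - p)
```

Because β̃ and S_0 differ by roughly eleven orders of magnitude, searching directly over θ would stall the simplex; rescaling to O(1) variables is one simple remedy.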
Even though the distribution of the errors is not specified, asymptotic theory can be used to describe the distribution of the random variable θ_OLS [3,27]. Provided that a number of regularity conditions as well as sampling conditions are met (see [27] for details), it can be shown that, asymptotically (i.e., as n → ∞), θ_OLS is distributed according to the following multivariate normal distribution:

θ_OLS ∼ N_4(θ_0, Σ_0^n), (12)

where Σ_0^n = n^{−1} σ_0^2 Ω_0^{−1} and Ω_0 = lim_{n→∞} (1/n) χ(θ_0, n)^T χ(θ_0, n).
We remark that the theory requires that this limit exist and that the matrix Ω_0 be non-singular.
The matrix Σ_0^n is the 4 × 4 covariance matrix, whose entries equal cov((θ_OLS)_i, (θ_OLS)_j), and the n × 4 matrix χ(θ_0, n) is the sensitivity matrix of the system, as defined and discussed below.
In general, θ_0, σ_0^2, and Σ_0^n are unknown, so these quantities are approximated by the estimates θ̂_OLS and σ̂_OLS^2, and the covariance matrix

Σ̂_OLS^n = σ̂_OLS^2 [χ(θ̂_OLS, n)^T χ(θ̂_OLS, n)]^{−1}. (13)

Consequently, for large n, we have, approximately,

θ̂_OLS ∼ N_4(θ_0, Σ̂_OLS^n). (14)

We obtain the standard error for the i-th element of θ̂_OLS by calculating

SE(θ̂_OLS)_i = sqrt[(Σ̂_OLS^n)_{ii}]. (15)

The n × 4 matrices χ(θ, n) that appear in the above formulae are called sensitivity matrices and are defined by

χ(θ, n)_{jk} = ∂z(t_j; θ)/∂θ_k. (16)

The sensitivity matrix describes the variation of the model output with respect to the parameters and, for this model-based dynamical system, can be obtained using standard theory [2,12,16,19,21,26].
The entries of the j-th row of χ(θ, n) describe how the weekly incidence at time t_j changes in response to changes in the parameter (i.e., in S_0, I_0, β̃, or γ), and these can be calculated by differentiating the incidence integral:

∂z(t_j; θ)/∂S_0 = ∫_{t_{j−1}}^{t_j} β̃ [(∂S/∂S_0) I + S (∂I/∂S_0)] dt, (17)
∂z(t_j; θ)/∂I_0 = ∫_{t_{j−1}}^{t_j} β̃ [(∂S/∂I_0) I + S (∂I/∂I_0)] dt, (18)
∂z(t_j; θ)/∂β̃ = ∫_{t_{j−1}}^{t_j} {S I + β̃ [(∂S/∂β̃) I + S (∂I/∂β̃)]} dt, (19)
∂z(t_j; θ)/∂γ = ∫_{t_{j−1}}^{t_j} β̃ [(∂S/∂γ) I + S (∂I/∂γ)] dt. (20)

We see that these expressions involve the partial derivatives of the state variables, S(t; θ) and I(t; θ), with respect to the parameters. Analytic forms of the sensitivities are not available because the state variables are the solutions of a nonlinear system; instead, they are calculated numerically.
In order to outline how these numerical sensitivities may be found, we introduce the notation x(t; θ) = (S(t; θ), I(t; θ)) and denote by g = (g_1, g_2) the vector function whose entries are given by the expressions on the right sides of Equations (1) and (2). Then we can write the single-outbreak SIR model in the general vector form

dx/dt (t; θ) = g(x(t; θ); θ), (21)
x(0; θ) = (θ_1, θ_2). (22)
Because the function g is differentiable (in both t and θ), taking the partial derivatives ∂/∂θ of both sides of Equation (21), we obtain the differential equation

(d/dt)(∂x/∂θ) = (∂g/∂x)(∂x/∂θ) + ∂g/∂θ. (23)

Here ∂g/∂x is a 2-by-2 matrix, ∂g/∂θ is a 2-by-4 matrix, and ∂x/∂θ is the 2-by-4 matrix

[ ∂S/∂S_0  ∂S/∂I_0  ∂S/∂β̃  ∂S/∂γ ;
  ∂I/∂S_0  ∂I/∂I_0  ∂I/∂β̃  ∂I/∂γ ].

Numerical values of the sensitivities are calculated by solving (21) and (23) simultaneously.
We define φ(t) = (∂x/∂θ)(t; θ), evaluate the parameter at the estimate, θ = θ̂, and solve the following differential equations from t = 0 to t = t_n:

dφ/dt = (∂g/∂x)(x(t; θ̂); θ̂) φ(t) + (∂g/∂θ)(x(t; θ̂); θ̂), φ(0) = [1 0 0 0; 0 1 0 0], (24)

the initial condition following from x(0; θ) = (θ_1, θ_2).
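As an independent check on such computations, χ(θ, n) can also be approximated by forward finite differences. The sketch below takes that shortcut (rather than solving the sensitivity equations) and then forms the covariance matrix and standard errors; the values standing in for θ̂_OLS and σ̂_OLS^2 are made up, and the enormous condition number of χᵀχ is exactly the kind of pitfall flagged elsewhere in this paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def weekly_incidence(theta, t_obs):
    """Model-predicted weekly incidence for theta = (S0, I0, beta_tilde, gamma)."""
    S0, I0, bt, gamma = theta
    def rhs(t, x):
        new = bt * x[0] * x[1]
        return [-new, new - gamma * x[1], new]
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [S0, I0, 0.0],
                    t_eval=t_obs, rtol=1e-10, atol=1e-10)
    return np.diff(sol.y[2])

def sensitivity_matrix(theta, t_obs, rel_step=1e-6):
    """n-by-4 matrix chi with chi[j, k] ~ dz(t_j; theta)/dtheta_k,
    approximated here by forward differences."""
    z0 = weekly_incidence(theta, t_obs)
    chi = np.empty((z0.size, theta.size))
    for k in range(theta.size):
        perturbed = np.array(theta, dtype=float)
        h = rel_step * theta[k]
        perturbed[k] += h
        chi[:, k] = (weekly_incidence(perturbed, t_obs) - z0) / h
    return chi

theta_hat = np.array([9.9e5, 100.0, 2.0e-6, 1.0])   # stand-in for an OLS estimate
sigma2_hat = 2500.0                                  # stand-in error-variance estimate
t_obs = np.arange(0.0, 21.0)

chi = sensitivity_matrix(theta_hat, t_obs)
fisher = chi.T @ chi
cond = np.linalg.cond(fisher)    # typically enormous here: poor identifiability
cov = sigma2_hat * np.linalg.inv(fisher)
# abs() guards against tiny negative diagonal entries that inverting a
# nearly singular matrix can produce.
standard_errors = np.sqrt(np.abs(np.diag(cov)))
```

The unscaled parameters differ by many orders of magnitude, so the columns of χ do too; this alone drives the condition number of χᵀχ far above the reliable-inversion range.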

Generalized Least Squares (GLS) Estimation
The errors in the statistical model defined by Equation (5) were assumed to have constant variance, which may not be an appropriate assumption for all data sets. One alternative statistical model that can account for a more complex error structure in the case counting process is

Y_j = z(t_j; θ_0)(1 + ε_j), j = 1, . . . , n. (29)

As before, it is assumed the ε_j are i.i.d. random variables with E(ε_j) = 0 and var(ε_j) = σ_0^2 < ∞, but no further assumptions are made. Under these assumptions, the observation mean is again equal to the model prediction, E[Y_j] = z(t_j; θ_0), while the variance in the observations is a function of the time point, with var(Y_j) = σ_0^2 z^2(t_j; θ_0). In particular, this variance is non-constant and model-dependent. One situation in which this error structure may be appropriate is when observation errors scale with the size of the measurement (so-called relative noise).

Given a set of observations Y = (Y_1, . . . , Y_n), the GLS estimator is defined as

θ_GLS = arg min_{θ ∈ Θ} Σ_{j=1}^{n} w_j [Y_j − z(t_j; θ)]^2, (30)

where the w_j are a set of non-negative weights [13]. Unlike the ordinary least squares formulation, this definition assigns different levels of influence, described by the weights, to the different observations in the cost function. For the error structure described above in Equation (29), the weights are taken to be inversely proportional to the square of the predicted incidence: w_j = 1/z^2(t_j; θ). We shall also consider weights taken to be proportional to the reciprocal of the predicted incidence; these correspond to assuming that the variance in the observations is proportional to the value of the model (as opposed to its square).
Suppose {y_j}_{j=1}^{n} is a realization of the case counting process {Y_j}_{j=1}^{n} and define the function L(θ) as

L(θ) = Σ_{j=1}^{n} w_j [y_j − z(t_j; θ)]^2. (31)

The quantity θ_GLS is a random variable and a realization of it, denoted by θ̂_GLS, is obtained by solving

θ̂_GLS = arg min_{θ ∈ Θ} L(θ). (32)

In the limit as n → ∞, the GLS estimator θ_GLS has the following asymptotic properties [3,13]:

θ_GLS ∼ N_4(θ_0, Σ_0^n), (33)

where

Σ_0^n ≈ σ_0^2 [χ(θ_0, n)^T W(θ_0) χ(θ_0, n)]^{−1}, W(θ) = diag(w_1(θ), . . . , w_n(θ)). (34)

The sensitivity matrix χ(θ_0, n) is as defined in Section 4.1.
Because θ_0 and σ_0^2 are unknown, the estimate θ̂_GLS is used to calculate the approximations

σ̂_GLS^2 = (1/(n − 4)) Σ_{j=1}^{n} w_j [y_j − z(t_j; θ̂_GLS)]^2, (35)

Σ̂_GLS^n = σ̂_GLS^2 [χ(θ̂_GLS, n)^T W(θ̂_GLS) χ(θ̂_GLS, n)]^{−1}. (36)

As before, the standard errors for θ̂_GLS can be approximated by taking the square roots of the diagonal elements of the covariance matrix Σ̂_GLS^n. The cost function used in GLS estimation involves weights whose values depend on the values of the fitted model. These values are not known before carrying out the estimation procedure, and consequently GLS estimation is implemented as an iterative process. An OLS fit is first performed on the data, and the resulting model values provide an initial set of weights.
A weighted least squares fit is then performed using these weights, obtaining updated model values and hence an updated set of weights. The weighted least squares process is repeated until some convergence criterion is satisfied, such as successive values of the estimates being deemed sufficiently close to each other. The process can be summarized as follows:

1. Estimate θ̂_GLS by θ̂^(0) using the OLS Equation (9). Set k = 0;
2. Form the weights using the current estimate: w_j = 1/z^2(t_j; θ̂^(k));
3. Solve the weighted least squares problem with these weights held fixed to obtain the (k + 1)-st estimate θ̂^(k+1) for θ̂_GLS;
4. Set k = k + 1 and return to 2. Terminate the procedure when successive estimates for θ̂_GLS are sufficiently close to each other.
The convergence of this procedure is discussed in [7,13].
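The iterative reweighting can be illustrated compactly. To keep the sketch short, a hypothetical two-parameter exponential-growth model stands in for the SIR incidence z(t_j; θ); the structure of the iteration, not the model, is the point:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.arange(1.0, 21.0)                      # 20 weekly observation times

def model(theta, t):
    """Toy stand-in for z(t_j; theta): two-parameter exponential growth."""
    a, r = theta
    return a * np.exp(r * t)

theta_true = np.array([10.0, 0.2])
# Relative noise, so var(Y_j) is proportional to the squared model value.
y = model(theta_true, t) * (1.0 + 0.05 * rng.standard_normal(t.size))

def wls_fit(theta_start, weights):
    """Weighted least squares fit by Nelder-Mead from a given starting point."""
    cost = lambda th: np.sum(weights * (y - model(th, t)) ** 2)
    return minimize(cost, theta_start, method="Nelder-Mead").x

# Step 1: an OLS fit (unit weights) provides the initial estimate theta^(0).
theta_k = wls_fit(np.array([8.0, 0.25]), np.ones_like(t))

# Steps 2-4: rebuild the weights w_j = 1/z(t_j; theta^(k))^2 from the current
# fit, re-solve the weighted problem, and stop when successive estimates agree.
for _ in range(20):
    w = 1.0 / model(theta_k, t) ** 2
    theta_next = wls_fit(theta_k, w)
    if np.max(np.abs(theta_next - theta_k) / np.abs(theta_k)) < 1e-6:
        theta_k = theta_next
        break
    theta_k = theta_next

theta_gls = theta_k
```

The same loop applies unchanged when `model` is replaced by a numerical SIR incidence solver; only the cost of each weighted fit grows.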

Estimation of the Effective Reproductive Number
Let the pair (θ̂, Σ̂) denote the parameter estimate and covariance matrix obtained with either the OLS or GLS methodology from a given realization {y_j}_{j=1}^{n} of the case counting process. Simulation of the SIR model then allows the time course of the susceptible population, S(t; θ̂), to be generated. The time course of the effective reproductive number can then be calculated as R(t; θ̂) = S(t; θ̂) β̃/γ, with β̃ and γ evaluated at their estimated values. This trajectory is our central estimate of R(t).
The uncertainty in the resulting estimate of R(t) can be assessed by repeated sampling of parameter vectors from the corresponding sampling distribution obtained from the asymptotic theory, and applying the above methodology to calculate the R(t) trajectory that results each time. To generate m such sample trajectories, we sample m parameter vectors, θ^(k), from the 4-multivariate normal distribution N_4(θ̂, Σ̂). We require that each θ^(k) lie within our feasible region, Θ. If this is not the case, then we resample until θ^(k) ∈ Θ. Numerical solution of the SIR model using θ^(k) allows the sample trajectory R(t; θ^(k)) to be calculated. Below, we summarize the steps involved in the construction of the sampling distribution of the effective reproductive number:

1. Set k = 1;
2. Obtain the k-th parameter sample θ^(k) from the 4-multivariate normal distribution N_4(θ̂, Σ̂);
3. If θ^(k) ∉ Θ (the constraints are not satisfied), return to 2. Otherwise go to 4;
4. Using θ = θ^(k), find numerical solutions, denoted by S(t; θ^(k)) and I(t; θ^(k)), to the nonlinear system defined by Equations (1) and (2). Construct the effective reproductive number as R(t; θ^(k)) = S(t; θ^(k)) β̃^(k)/γ^(k);
5. Set k = k + 1. If k > m then terminate; otherwise return to 2.
Uncertainty estimates for R(t) are calculated by finding appropriate percentiles of the distribution of the R(t) samples.
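These steps can be sketched as follows; the estimate, covariance matrix, and feasibility constraints below are made-up stand-ins for (θ̂, Σ̂) and Θ:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)

theta_hat = np.array([9.9e5, 100.0, 2.0e-6, 1.0])   # stand-in estimate (S0, I0, beta_tilde, gamma)
Sigma_hat = np.diag((0.02 * theta_hat) ** 2)        # stand-in covariance: 2% standard errors
t = np.linspace(0.0, 20.0, 201)

def R_trajectory(theta):
    """Solve the SIR model and return R(t) = S(t) * beta_tilde / gamma."""
    S0, I0, bt, gamma = theta
    def rhs(tt, x):
        new = bt * x[0] * x[1]
        return [-new, new - gamma * x[1]]
    sol = solve_ivp(rhs, (t[0], t[-1]), [S0, I0], t_eval=t, rtol=1e-8)
    return sol.y[0] * bt / gamma

m, samples = 200, []
while len(samples) < m:
    th = rng.multivariate_normal(theta_hat, Sigma_hat)
    # Feasibility: positive parameters and the epidemic threshold S0*bt/gamma > 1.
    if np.all(th > 0.0) and th[0] * th[2] / th[3] > 1.0:
        samples.append(R_trajectory(th))
samples = np.array(samples)

# Pointwise uncertainty bands from percentiles of the sampled trajectories.
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
```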

Synthetic Data with Constant Variance Noise
We illustrate the OLS methodology and investigate its performance using synthetic data. A true parameter θ_0 is chosen and a set of synthetic data is constructed by adding random noise to the model prediction of incidence (for every time point t_j) in the following manner:

Y_j = z(t_j; θ_0) + c U_j, j = 1, . . . , n. (37)

Here, U_j is a standard normal random variable (U_j ∼ N(0, 1)) and the constant c is the product of a pre-selected value, α, and the minimum value of the simulated incidence:

c = α min_{1 ≤ j ≤ n} z(t_j; θ_0). (38)

The multiplier α allows us to control the variance of the noise, while the use of the minimum incidence is an attempt to reduce the occurrence of negative values in the synthetic data set. It is clear from Equation (37) that the noise added to the synthetic data has constant variance, given by var(cU_j) = c^2. A realization of the case counting process is denoted by {y_j}_{j=1}^{n}, with y_j = z(t_j; θ_0) + c u_j, where all the u_j are independently drawn from a standard normal distribution.
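This construction might look as follows in code; the "true" parameter vector is a hypothetical choice, and a short 20-week series is used rather than the larger synthetic sets analysed in this section:

```python
import numpy as np
from scipy.integrate import solve_ivp

def weekly_incidence(theta, t_obs):
    """Weekly incidence z(t_j; theta) for theta = (S0, I0, beta_tilde, gamma)."""
    S0, I0, bt, gamma = theta
    def rhs(t, x):
        new = bt * x[0] * x[1]
        return [-new, new - gamma * x[1], new]
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [S0, I0, 0.0],
                    t_eval=t_obs, rtol=1e-8)
    return np.diff(sol.y[2])

theta0 = np.array([9.9e5, 100.0, 2.0e-6, 1.0])   # hypothetical "true" parameter
t_obs = np.arange(0.0, 21.0)
z = weekly_incidence(theta0, t_obs)

alpha = 0.50
c = alpha * z.min()                               # noise sd tied to the minimum incidence
rng = np.random.default_rng(3)
y = z + c * rng.standard_normal(z.size)           # constant-variance observations
```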
The optimization routine requires an initial estimated value of the parameter; this is taken to be θ = (1 + a)θ 0 , where a also denotes a pre-selected multiplier. Selecting different values for α and a allows us to investigate the performance of the estimation process in the face of different levels of noise and differing levels of information as to the approximate location of the best fitting model parameter (i.e., the "true" parameter).
A synthetic data set with n = 1,000 observations was constructed by setting α = 0.50. The initial guess was set using a = 0.25. Then m = 10,000 sample trajectories of R(t) were generated using the procedure discussed above. The resulting estimates of the parameters and effective reproductive numbers, together with measures of uncertainty, are given in Table 2. Also listed are the initial parameter guesses given to the optimization routine and the minimized value of the cost function, J(θ̂_OLS). Figure 2(a) depicts the synthetic data (squares), together with the best-fitting model (solid curve). We remark that the observation noise, which is on the order of α = 0.50 times the smallest incidence value, represents a very small error over the major part of the synthetic data set. As such, it is almost impossible to distinguish between the data and the fitted model in this figure.
The trajectories of the effective reproductive number are shown as grey solid curves in the corresponding figure panels. A simple residuals analysis is illustrated in Figure 4. A residual at time t_j is defined as y_j − z(t_j; θ̂_OLS); in Figure 4(a), these residuals are plotted against the predicted values, z(t_j; θ̂_OLS). In this example, the OLS methodology performs well, yielding excellent estimates of the true parameter value. This should not be surprising because the noise level was chosen to be extremely small and we provided the optimization routine with an initial parameter value that was close to the true value.
The application of the OLS methodology does not always go so smoothly as in the previous example: parameter estimates can be obtained that are far from the true values. The cost function, J(θ), typically possesses multiple minima, and the simple-minded use of fminsearch can yield a parameter estimate located at one of the other (local) minima, particularly when the initial parameter estimate is some distance away from the true value. Table 3 presents estimates for the same synthetic data set, but for which the initial parameter estimate was taken to be one hundred and seventy-five percent larger than the true value (θ = 2.75 θ_0); the corresponding effective reproductive number curves are shown in Figure 6.
Because we know the true parameter value and the outcome of a successful model fit to this data set, it was easy for us to identify the problems that arose here. We note, for instance, that the value of the cost function for the estimated parameter value is two orders of magnitude larger than in the previous case, quantifying that the model fit is much worse. We would not have the luxury of these pieces of information if this estimation arose in the consideration of a real-world data set. The residuals plots, however, clearly suggest that there are serious problems with the model fit. In particular, there are obvious temporal trends in the residuals, indicating systematic deviations between the fitted model and data even though the observation noise is small.

Figure 6: One thousand of the m = 10,000 effective reproductive number curves (solid gray) constructed using parameters drawn from the 4-multivariate normal distribution N_4(θ̂_OLS, Σ̂_OLS^n). The curve R(t; θ̂_OLS) is shown in solid black. The curve of the median value of the R(t) samples, at each t, is also shown as a dashed black curve, but is indistinguishable from the curve R(t; θ̂_OLS).

Synthetic Data with Non-constant Variance Noise
We generated a second synthetic data set with non-constant variance noise. The true value θ_0 was fixed and was used to calculate the numerical solution z(t_j; θ_0). Observations were computed in the following fashion:

Y_j = z(t_j; θ_0)(1 + α V_j), j = 1, . . . , n, (39)

where the V_j are independent random variables with standard normal distribution, and 0 < α < 1 denotes a desired percentage. In this way, var(Y_j) = [z(t_j; θ_0) α]^2, which is non-constant across the time points t_j. If the terms {v_j}_{j=1}^{n} denote a realization of {V_j}_{j=1}^{n}, then a realization of the observation process is given by y_j = z(t_j; θ_0)(1 + α v_j).
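A sketch of this relative-noise construction, again with a hypothetical "true" parameter vector and a short 20-week series:

```python
import numpy as np
from scipy.integrate import solve_ivp

def weekly_incidence(theta, t_obs):
    """Weekly incidence z(t_j; theta) for theta = (S0, I0, beta_tilde, gamma)."""
    S0, I0, bt, gamma = theta
    def rhs(t, x):
        new = bt * x[0] * x[1]
        return [-new, new - gamma * x[1], new]
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [S0, I0, 0.0],
                    t_eval=t_obs, rtol=1e-8)
    return np.diff(sol.y[2])

theta0 = np.array([9.9e5, 100.0, 2.0e-6, 1.0])   # hypothetical "true" parameter
t_obs = np.arange(0.0, 21.0)
z = weekly_incidence(theta0, t_obs)

alpha = 0.075
rng = np.random.default_rng(4)
# var(Y_j) = (alpha * z_j)^2: the noise scales with the model value.
y = z * (1.0 + alpha * rng.standard_normal(z.size))
```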
An n = 1,000 point synthetic data set was constructed with α = 0.075. The optimization algorithm was initialized with the estimate θ = 1.10 θ_0. The weights appearing in Equation (30) were chosen as w_j = 1/z(t_j; θ)^2. Table 4 lists estimates of the parameters and R(t), together with uncertainty estimates. In the case of R(t), uncertainty was assessed with the simulation approach using m = 10,000 samples of the parameter vector, drawn from N_4(θ̂_GLS, Σ̂_GLS^n). Figure 7(a) depicts both the data and the fitted model points, z(t_j; θ̂_GLS), plotted versus t_j. Figure 7(b) depicts 1,000 of the 10,000 R(t) curves.

Table 4: Estimates from a synthetic data set of size n = 1,000 with non-constant variance, using α = 0.075. The R(t) sample size is m = 10,000. The initial guess of the optimization algorithm was θ = 1.10 θ_0. Each weight in the cost function L(θ) (see Equation (31)) was equal to 1/z(t_j; θ)^2 for j = 1, . . . , n.

Analysis of Influenza Outbreak Data
The OLS and GLS methodologies were applied to longitudinal observations of six influenza outbreaks (see Section 2), giving estimates of the parameters and the reproductive number for each season. The number of observations n varies from season to season. The R(t) sample size was m = 10,000 in each case. The set of admissible parameters Θ is defined by the lower and upper bounds listed in Table 5, along with the inequality constraint S_0 β̃/γ > 1. The bounds in Table 5 were obtained from, or based on, [8,23,25] and references therein. For brevity, we only present here the results obtained using data from the 1998-1999 season.

OLS Estimation
In most cases, visual comparison of the trajectory of the best fitting model obtained using OLS and the data points suggests that a good fit has been achieved (Figure 8(a)). The statistics that quantify the uncertainty in the estimated values of the parameters, however, indicate that this may not always be the case. In many cases, the standard errors are of the same order of magnitude as the parameter values themselves, indicating wide error bounds (see Table 6) and suggesting a lack of confidence in the estimates.
We should, however, interpret the statistical results with some caution because the residuals plots (Figure 8) indicate that the errors are correlated over time, violating an assumption of the statistical model. The temporal correlation of the errors could represent some inadequacy in the way that the dynamic model describes the epidemic process, or an inadequacy in the data set itself. Another indication of problems in the estimation process comes from the matrix χ(θ̂_OLS, n)^T χ(θ̂_OLS, n). The condition number of this matrix is 9.4 × 10^19, indicating that the matrix is close to singular. Calculation of the covariance matrix requires the inversion of this matrix, and, as mentioned above, the asymptotic theory requires that the matrix Ω_0, defined as a limit of matrix products of this form, have a non-singular limit as n → ∞. A nearly singular matrix can arise when there is redundancy in the data or when there are problems with parameter identifiability [3,4].
It is interesting to observe that the estimated values of γ frequently fall on the boundary of the feasible region. This may impact the uncertainty analysis, given that the conditions of the asymptotic theory require that the true parameter value lies in the interior of the feasible region. If our estimates commonly fall on the boundary, this could be an indication that the true parameter value may not lie within our feasible region. We return to this issue below, in Section 6.4.

GLS Estimation
Visual inspection suggests that the model fits obtained using the GLS approach (Figure 9) are even worse than those obtained using OLS. This is somewhat misleading, however, because the weights, defined as w_j = 1/[z(t_j; θ)]^2, mean that the GLS fitting procedure (unlike visual inspection of the figures) places increased emphasis on data points whose model value is small and decreased emphasis on data points where the model value is large. If these graphs are, instead, plotted with a logarithmic scale on the vertical axis, an accurate visualization is obtained (Figure 10): multiplicative observation noise on a linear scale becomes constant-variance additive observation noise on a logarithmic scale.
As before, however, the parameter estimates have standard errors that are often of the same order of magnitude as the estimates themselves (Table 7), and the residuals plots again reveal problems with the fit. The condition number of the matrix χ(θ̂_GLS, n)^T W(θ̂_GLS) χ(θ̂_GLS, n) is 9.0 × 10^19. This is very similar to that for the OLS estimation, again suggesting caution in interpreting the standard errors.
The modified residuals versus model plot indicates that the 1/z(t_j; θ_GLS)^2 weights may have overcompensated for the non-constant variance seen in the OLS residuals. This suggests that it may be appropriate to use weights that vary in a milder fashion, such as 1/z(t_j; θ_GLS).
The model fits that result with these new weights appear, by visual inspection, to provide a more satisfactory fit to the data (Figure 11). Standard errors for the parameter estimates are still large, however (Table 8). Our earlier comment concerning the difficulty of assessing the adequacy of GLS model fits by visual inspection should be borne in mind: Figure 12 gives a more accurate depiction, in which square roots of the quantities are plotted so as to transform errors whose variance scales with the model value into additive errors of constant variance.
The new modified residuals plots now focus on the quantities [y_j − z(t_j; θ_GLS)]/√z(t_j; θ_GLS). A handful of surprisingly large modified residuals are seen on many occasions, although these often do not appear on our plots because we choose the range on the residuals axis so that the majority of the points can be seen most clearly. The locations of these residuals are noted in the figure captions; we see that they occur during the initial part of the time series, when the numbers of cases are low.
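The modified residuals referred to here are the weighted residuals √w_j (y_j − z_j). A small helper (our own naming, applied to invented numbers) shows why low-count observations produce outsized values under either weighting scheme.

```python
import numpy as np

def modified_residuals(y, z, weights):
    """Weighted residuals sqrt(w_j) * (y_j - z_j); under a correctly
    specified GLS error model these should resemble i.i.d. constant-
    variance noise when plotted against the model values z."""
    return np.sqrt(weights) * (y - z)

# Hypothetical observed counts and fitted model values
z = np.array([2.0, 8.0, 50.0, 400.0, 900.0])
y = np.array([4.0, 6.0, 55.0, 380.0, 930.0])

r1 = modified_residuals(y, z, 1.0 / z)      # 1/z weights: (y - z)/sqrt(z)
r2 = modified_residuals(y, z, 1.0 / z**2)   # 1/z^2 weights: relative errors
print(r1)
print(r2)
# The same absolute error counts for far more at small z, which is why a
# few early, low-count weeks can dominate the GLS cost function.
```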

GLS Estimation Using Truncated Data Sets
It is quite plausible that our description of the error structure of the data is inadequate when the numbers of cases are at low levels. For instance, the reporting process might change as the outbreak starts to take hold (e.g., doctors become more alert to possible flu cases) or comes close to ending. Also, our model is deterministic whereas a real-world epidemic contains stochasticity.
Stochastic effects may exhibit a relatively large impact at the start or end of an epidemic, when the numbers of cases are low. It is possible for the infection to undergo extinction, a phenomenon which cannot be captured by the deterministic model. Spatial clustering of cases is also a distinct possibility, particularly during the early stages of an outbreak. This will affect the time course of an outbreak as well as the reporting process: clustering of cases may well increase the reporting noise if cases in a cluster tend to get reported together (e.g., a cluster occurs within an area where many isolates are sent to the CDC) or not reported together (e.g., a cluster occurs in an area that has poorer coverage in the reporting process). Both forms of the weights (inversely proportional to the square of the predicted incidence or inversely proportional to the predicted incidence) mean that errors at these small values have considerable impact on the cost function, and hence on the GLS estimation process, although this is less of a concern for the 1/z weights.
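The role of stochastic extinction can be illustrated with a simple event-driven (Gillespie) simulation of the SIR process. This is our own sketch with arbitrary parameter values, not an analysis of the influenza data: for R(0) only modestly above one, a substantial fraction of runs started from a single infective die out early, an outcome the deterministic model cannot produce.

```python
import numpy as np

def sir_gillespie(S0, I0, beta, gamma, N, rng, t_max=50.0):
    """Stochastic SIR simulated with the Gillespie algorithm; returns the
    final number of recovered individuals (hypothetical illustration)."""
    S, I, R, t = S0, I0, 0, 0.0
    while I > 0 and t < t_max:
        rate_inf = beta * S * I / N   # infection event rate
        rate_rec = gamma * I          # recovery event rate
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
    return R

rng = np.random.default_rng(3)
beta, gamma, N = 2.6, 1.75, 1000   # assumed values; R0 = beta/gamma ~ 1.5

# Starting from a single infective, many runs go extinct almost at once,
# while the rest produce a sizeable outbreak.
finals = [sir_gillespie(N - 1, 1, beta, gamma, N, rng) for _ in range(200)]
extinct_fraction = np.mean([f < 10 for f in finals])
print(extinct_fraction)
```

Branching-process theory predicts an early-extinction probability of roughly 1/R0 here, so a deterministic fit to a data set that happened to survive this phase is conditioning on a non-extinct trajectory.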
Another issue that has been raised by studies of parameter estimation in biological situations concerns redundancy in information measured when a system is close to its equilibrium [4]. This might be a relevant issue for the final part of the outbreak data as there is often a period lasting ten or more weeks when there are few cases.
We investigated whether the removal of the lowest valued points from the data sets would improve the fitting process. We constructed truncated data sets by considering only the period between the time when the number of isolates first reached ten at the start of the outbreak and first fell below ten at the end of the outbreak. As a notational convenience, we refer to the numbers of susceptibles and infectives at the start of the first week of the truncated data set as S_0 and I_0, even though these times no longer correspond to the start of the influenza season.
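The truncation rule can be stated as a small helper. The function name and data are our own, and the sketch simplifies by assuming the count crosses the threshold of ten only once in each direction.

```python
import numpy as np

def truncate_outbreak(t, counts, threshold=10):
    """Keep the window from the first week the isolate count reaches
    `threshold` until it first falls below `threshold` again."""
    idx = np.nonzero(counts >= threshold)[0]
    if idx.size == 0:
        return t[:0], counts[:0]          # never reaches threshold
    start = idx[0]
    # first index after `start` where the count drops below threshold
    below = np.nonzero(counts[start:] < threshold)[0]
    end = start + below[0] if below.size else counts.size
    return t[start:end], counts[start:end]

t = np.arange(12)
counts = np.array([1, 3, 12, 40, 80, 60, 25, 11, 9, 4, 2, 1])
tt, cc = truncate_outbreak(t, counts)
print(tt)   # weeks 2..7 retained; S_0, I_0 now refer to the state at t = 2
```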
(For example, in Figures 13 and 14, S_0 and I_0 refer to the state of the system at t = 8.) Comparing Tables 7 and 9, which arise from GLS estimation with 1/z^2 weights, we see the quantity that appears in Equation (35) increase by 80%. (Table 9 presents estimation results from GLS, with weights 1/z(t_j; θ)^2, applied to the truncated influenza data set for season 1998-1999.) The corresponding residuals plots (see Figure 13(b) and (c)) provide no evidence that the assumptions of the statistical model are invalid.
The condition number of the matrix χ(θ_GLS, n)^T W(θ_GLS)χ(θ_GLS, n) is 9.2 × 10^19. A similar result is seen in the 1/z weights case; we remark that we no longer have the extreme outlier residuals.
Truncating the data sets has helped considerably with the GLS estimation process, although the large condition numbers are still cause for caution with the standard errors.
Truncation of the data set had little effect on the parameter estimates obtained using OLS (results not shown), except that the values of S 0 and I 0 were changed because they refer to a later initial time, as discussed above. Standard errors for the OLS estimates were higher than for the full data set, as should be expected given the reduced number of data points.

Estimation for a Reduced Parameter Set
The preceding results indicate that there are difficulties in estimating the parameter γ, as witnessed by the number of situations in which the estimate lies on the boundary of the feasible parameter region. Because γ is the one parameter for which we can obtain reasonably reliable estimates without the need to fit a model to an incidence time series [8,23], we fix its value and investigate estimation for a reduced three parameter problem. In all of what follows, we apply the estimation methodology to the truncated data sets, as discussed in the previous section.
We use a fixed infectious period of four days, i.e., 1/γ = 4 days = 4/7 weeks, and estimate the parameter vector θ = (S_0, I_0, β) using the OLS approach and the GLS approach with the weights discussed above. Estimation for the reduced parameter set leads to model fits that are not so different from those obtained using the full (p = 4) set of parameters in θ. For example, for the truncated data set from the 98-99 season, with weights equal to 1/z(t_j; θ), we have L(θ_GLS) = 38.72 for the full parameter set (Table 10) and L(θ_GLS) = 38.72 for the reduced parameter set (Table 13). The standard errors of the parameters, however, are smaller for the reduced parameter set.
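The reduced (p = 3) estimation can be sketched in code as follows. This is our own illustration, not the paper's implementation: the population size N, the synthetic data, and all function names are assumptions, and we perform a single 1/z-weighted pass rather than a full iterative GLS scheme.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

GAMMA = 7.0 / 4.0   # fixed recovery rate per week: 1/gamma = 4/7 weeks
N = 1.0e6           # hypothetical population size (our assumption)

def weekly_incidence(theta, t_weeks):
    """Model-predicted weekly incidence for theta = (S0, I0, beta) with
    gamma held fixed. Weekly incidence is the increment of a cumulative-
    infection compartment C, where dC/dt = beta*S*I/N."""
    S0, I0, beta = theta
    def rhs(t, x):
        S, I, C = x
        new_inf = beta * S * I / N
        return [-new_inf, new_inf - GAMMA * I, new_inf]
    grid = np.concatenate([[t_weeks[0] - 1.0], t_weeks])
    sol = solve_ivp(rhs, (grid[0], grid[-1]), [S0, I0, 0.0],
                    t_eval=grid, rtol=1e-8, atol=1e-8)
    return np.diff(sol.y[2])

def gls_fit(t, y, theta0):
    """Reduced-parameter fit with 1/z weights: minimize the weighted
    residuals (y - z)/sqrt(z) over theta = (S0, I0, beta)."""
    def resid(theta):
        z = np.maximum(weekly_incidence(theta, t), 1e-8)
        return (y - z) / np.sqrt(z)
    res = least_squares(resid, theta0,
                        bounds=([0.0, 0.0, 0.0], [N, N, 10.0]),
                        x_scale=[1e5, 10.0, 1.0], diff_step=1e-4)
    return res.x

# Synthetic check: recover theta from data generated by the model itself.
t = np.arange(1.0, 20.0)
truth = np.array([9.0e5, 50.0, 2.6])
y = weekly_incidence(truth, t)
est = gls_fit(t, y, theta0=np.array([8.5e5, 80.0, 2.4]))
print(est)                               # estimated (S_0, I_0, beta)
print(est[2] * est[0] / (GAMMA * N))     # implied initial effective R(t)
```

Even in this idealized, noiseless setting the near-redundancy between S_0 and β (only their product enters the initial growth rate) is what inflates the condition numbers discussed above.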

Discussion
We have presented parameter estimation methodologies that, using sensitivity analysis and asymptotic statistical theory, also provide measures of uncertainty for the estimated parameters.
The techniques were illustrated using synthetic data sets, and it was seen that they can perform very well with reasonable data sets. Even within the ideal situation provided by synthetic data, however, potential problems of the approach were identified. Worryingly, these problems were not apparent from inspection of the uncertainty estimates (standard errors) alone; they were revealed only by examination of model fit diagnostic plots, constructed in terms of the residuals of the fitted model. These results argue strongly for the routine use of uncertainty estimation, together with careful examination of residuals plots, when fitting SIR-type models to surveillance data.
The statistical methodology presented here only addresses the impact of observation error on parameter estimation. While the approach can handle different statistical models for the observation process, it does assume that we have a model that correctly describes the behavior of the system, albeit for an unknown value of the parameter vector. The methodology does not examine the effect of mis-specification of the model. It is well-known that this effect can dwarf the uncertainty that arises from observation error [24]. Examination of residuals plots, however, can identify systematic deviations between the behavior of the model and the data.
Application of the least squares approaches to the influenza isolate data gave mixed results.
Estimates of the effective reproductive number were in broad agreement with results obtained in other studies (see Table 14). While apparently reasonable fits were obtained in some instances, the uncertainty analyses highlighted situations in which visual inspection suggested that a good fit had been obtained but for which estimated parameters had large uncertainties. Residuals plots showed that error variance may not have been constant (i.e., observation noise was not simply additive), but more likely scaled according to either the square of the fitted value (i.e., relative measurement error) or the fitted value itself. The potentially large impact of errors at low numbers of cases on the GLS estimation process was clearly observed.
Temporal trends were observed in the residuals plots, indicative of systematic differences between the behavior of the SIR model and the data. Potential sources of these differences include inadequacies of the model's description of the process underlying the data and issues with the reliability of the data itself, particularly in light of the health warning attached to the data by the CDC. (We emphasize, however, that our use of these data sets should be seen only as an illustration of the approach.)

Table 14: Comparison between reproductive number estimates across studies of interpandemic influenza. In this table R_0 stands for the basic reproductive number (naive population), while max(R(t)) denotes the initial effective reproductive number in a non-naive population.

Studies of interpandemic influenza    Estimates
Bonabeau et al. [5]                   1.70 ≤ R_0 ≤ 3.00
Chowell et al. [10]                   1.30 ≤ max(R(t)) ≤ 1.50
Dushoff et al. [15]                   4.00 ≤ R_0 ≤ 16.00
Flahault et al. [18]                  R_0 = 1.37
Spicer & Lawrence [28]                1.46 ≤ R_0 ≤ 4.48
Viboud et al. [31]                    1.90 ≤ max(R(t)) ≤ 2.50

Sophisticated mathematical and statistical algorithms and analyses can be utilized to fit SIR-type epidemiological models to surveillance data. Good quality data are required if this approach is to be successful. In many instances, however, the available surveillance data are most likely inadequate to validate the SIR model with any degree of confidence. This is likely to be true of much of the modeling effort for epidemics where the data collection process has inadequacies.