1 Introduction

The motivating idea behind model averaging is that, with various competing models at hand, each having its own strengths and weaknesses, it should be possible to combine the individual model forecasts into a single new forecast that, by one's preferred standard, is at least as good as any of the individual forecasts. As usual in statistical model building, the aim is to use the available information efficiently, and to construct a predictive model with the right balance between model flexibility and over-fitting. Viewed as such, model averaging is a natural generalization of the more traditional aim of model selection. Indeed, the model averaging literature has its roots in the model selection literature, which continues to be a very active research area with many applications in hydrology (see, e.g., Wagener and Gupta 2005; Ye et al. 2008). As a result of the steady increase in computer power, model averaging has gradually gained popularity as an alternative to model selection. For recent applications of model averaging in hydrology, see, for instance, Vrugt and Robinson (2007), Rojas et al. (2008), and Wöhling and Vrugt (2008).

Some model averaging techniques, such as equal weights averaging, focus on point predictors, while others, such as Bayesian model averaging, are concerned with density forecasts. While the recent developments on density forecasts will undoubtedly prove extremely useful, and will find many practical applications, there will always be cases where accurate point predictors are desired. Since any density forecast has an associated point predictor, namely the predictive mean of the forecast density, the question naturally arises which of the point predictors of the available averaging methods is most accurate. Because the recently developed sophisticated density forecast methods aim at obtaining accurate predictive densities, their point forecasts need not perform better than those of more traditional point forecast methods. For instance, Stockdale (2000) notes that simple averaging seems to have been the more successful approach, but also argues that more research is required.

With this in mind, the aim of this paper is to investigate the accuracy of a wide range of point predictors empirically, using hydrologic data from different case studies. Unknown model parameters are estimated using data from the calibration period, while the predictive performance is assessed using data from the subsequent evaluation period. The point predictors are obtained from a variety of model averaging techniques, and evaluated in terms of out-of-sample root mean squared prediction error (RMSE). Although other measures of predictive accuracy exist, the RMSE is natural in this context, since it is one of the objective functions that is being minimized (via maximization of the likelihood) during the calibration period for most of the density forecast methods in the literature.

Two hydrological case studies are considered. The first case study considers a classical forecasting problem in surface water hydrology, and involves rainfall-runoff modeling of the Leaf River watershed in Mississippi, USA, using a 36-year historical record of daily streamflow data. A set of 8 commonly used conceptual hydrologic models is used to predict and study streamflow dynamics. The second case study involves soil hydrology, and focuses on prediction of soil water flow through the vadose zone using data from a layered vadose zone of volcanic origin in New Zealand. A 7-member ensemble of soil hydraulic models is used to predict tensiometric pressure heads at multiple depths. The data sets, models, and ensembles used in the two case studies have been described elsewhere (Vrugt and Robinson 2007; Wöhling and Vrugt 2008), and details can be found there.

The point forecasts considered are based on the following model averaging techniques: equal weights averaging (EWA), in which each of the available models is weighted equally; Bates-Granger averaging (BGA) (Bates and Granger 1969); AIC- and BIC-based model averaging (AICA and BICA, respectively) (Buckland et al. 1997; Burnham and Anderson 2002; Hansen 2008); Bayesian model averaging (BMA) (Raftery et al. 1997; Hoeting et al. 1999; Raftery et al. 2005); Mallows model averaging (MMA) (Hansen 2007, 2008); and weights equal to the ordinary least squares (OLS) estimates of the coefficients of a multiple linear regression model, first suggested in the forecasting context by Granger and Ramanathan (1984) and referred to here as Granger-Ramanathan averaging (GRA). Note that some of these model averaging techniques only allow non-negative weights that sum to one (weights on the simplex). For comparison, whenever feasible, methods whose weights are usually restricted to the simplex were also estimated without this restriction.

The paper is organized as follows. Section 2 introduces the concept of model averaging, considers bias removal and provides a concise description of the various model averaging strategies considered here. Some statistical properties of these methods, in particular convergence to the optimal predictor, are considered briefly in Sect. 3. In Sect. 4 we describe the two hydrologic case studies, as well as the data and models used, and summarize the results for the different averaging procedures. The RMSE and optimized weights of the individual models of the ensemble are the primary focus of that section. Section 5 provides a discussion of our results against the background of the statistical properties of the model averaging methods.

2 Model averaging

Let us denote by $\{Y_t\}$ a sequence of measurements of a hydrological quantity of interest, such as a sequence of daily discharge data. Further assume that an ensemble of $k$ competing forecasts is available, consisting of the predictions of $k$ competing models, with associated point forecasts $X_{i,t}$, $i=1,\ldots,k$, for each variable $Y_t$, in terms of the information available (i.e., all variables available to construct a forecast of $Y_t$). We focus on predicting $Y_t$ (e.g., streamflow (case study 1) or soil water pressure head (case study 2) at time $t$) based on the information available at the start of that period.

For hydrologic applications, the models that make up the ensemble could range from simple water balance models, through parsimonious conceptual watershed models, to increasingly complex fully integrated three-dimensional physically based models. Prior to our analysis, the various hydrologic models of the ensemble were calibrated through global optimization using a least squares objective function containing the error residuals between measured and predicted daily streamflow (case study 1) and tensiometric pressure head (case study 2). These same variables are the subject of interest in all our averaging studies herein. The predictions of the various ensemble members can therefore be considered optimal in the least squares sense, and no further reductions in RMSE are possible by parameter tuning.

A bias correction step of the individual forecasts is performed prior to the construction of the weights, as follows. Each of the individual forecasts is adjusted by applying a linear transformation of the form $X'_{i,t}=a_{i}+b_{i} X_{i,t}$. The coefficients $a_i$ and $b_i$, $i=1,\ldots,k$, for each of the models are found by OLS estimation using only the observations in the calibration set. Typically this bias correction leads to a small improvement of the predictive performance of the individual models, with $a_i$ close to zero and $b_i$ close to 1. Only when the calibration set is very small do the OLS estimates become noisy, in which case bias correction may destabilize the ensemble. This has been demonstrated in the context of streamflow forecasting by Vrugt and Robinson (2007). In the case studies we used a bias correction, although for comparison we also provide RMSE values obtained without bias correction. Subsequently, for notational simplicity, the symbol $X_{i,t}$ is used to indicate either an uncorrected or a bias-corrected predictor of $Y_t$, based on model $i$.
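As a small illustration, the bias-correction step can be coded as follows (a minimal Python sketch; the array names and shapes are our own conventions, not from the original studies):

```python
import numpy as np

def bias_correct(X_cal, y_cal, X_new):
    """Per-model linear bias correction X' = a + b * X, with (a, b)
    estimated by OLS on the calibration period only.

    X_cal : (n, k) array of individual model forecasts (calibration period)
    y_cal : (n,) array of observations (calibration period)
    X_new : (m, k) array of forecasts to be bias-corrected
    """
    n, k = X_cal.shape
    X_corr = np.empty_like(X_new, dtype=float)
    for i in range(k):
        # OLS regression of y_t on a constant and the i-th model forecast
        A = np.column_stack([np.ones(n), X_cal[:, i]])
        (a_i, b_i), *_ = np.linalg.lstsq(A, y_cal, rcond=None)
        X_corr[:, i] = a_i + b_i * X_new[:, i]
    return X_corr
```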

A popular way to combine point forecasts is the following linear model for the individual predictions:

$$ Y_t = \mathbf{X}_t^{\mathrm{T}} \boldsymbol{\beta} + \varepsilon_t = \sum_{i=1}^k \beta_i X_{i,t} + \varepsilon_t, $$
(1)

where $\{\varepsilon_t\}$ is a white noise sequence, which will be assumed to have a normal distribution with zero mean and unknown variance. In hydrological case studies one typically uses a split-sample test to divide the available data into a calibration and an evaluation time series. The parameter vector $\boldsymbol{\beta}$ is estimated from the calibration set, and the evaluation data set is used to determine the consistency and robustness of the calibrated model. We follow the same approach in this paper, and use the RMSE during the evaluation period as a measure of out-of-sample predictive performance.

As indicated above, this paper focuses on comparing point predictors in terms of the out-of-sample prediction error. The point forecasts associated with model (1) are

$$ \tilde{Y}^{\boldsymbol{\beta}}_{t} = \mathbf{X}_t^{\mathrm{T}} \boldsymbol{\beta} = \sum_{i=1}^k \beta_i X_{i,t}. $$
(2)
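For concreteness, the predictor (2) and the out-of-sample RMSE used throughout this paper can be computed as in the following sketch (our own illustrative code; setting $\boldsymbol{\beta} = (1/k, \ldots, 1/k)$ yields the equal weights average of Sect. 2.1):

```python
import numpy as np

def combined_forecast(X, beta):
    """Point forecast of Eq. (2): a linear combination of the k member forecasts."""
    return X @ beta

def rmse(X_eval, y_eval, beta):
    """Out-of-sample RMSE of the combined forecast over the evaluation period."""
    e = y_eval - combined_forecast(X_eval, beta)
    return np.sqrt(np.mean(e ** 2))

# Example: equal weights averaging, beta = (1/k, ..., 1/k)
# k = X_eval.shape[1]
# print(rmse(X_eval, y_eval, np.full(k, 1.0 / k)))
```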

We compare a wide range of competing model averaging techniques proposed in the literature. These include simple methods such as EWA and BGA, as well as more complex methods such as AICA, BICA, GRA, BMA, and MMA. We next describe each of these methods in some detail, and give expressions for the value of $\boldsymbol{\beta}$ used by each method. These are denoted by $\hat{\boldsymbol{\beta}}_{\bullet}$, where the subscript indicates the averaging method used. Table 1 summarizes the main properties of each of the model averaging techniques considered.

Table 1 Main characteristics of the model averaging methods considered

2.1 Equal weights averaging

Under equal weights averaging, the combined forecast is obtained by simply giving each of the models equal weight, i.e., $\hat{\boldsymbol{\beta}}_{\rm EWA}=\left(\frac{1}{k}, \ldots, \frac{1}{k}\right)$, which has associated predictor $\tilde Y^{\hat{\boldsymbol{\beta}}_{\rm EWA}}_t = \frac{1}{k} \sum_{i=1}^k X_{i,t}$. Note that if no bias correction is used, this predictor does not depend on the data in the calibration set at all (the bias correction, if applied, does depend on the calibration data).

2.2 Bates-Granger averaging

A well-known choice, proposed by Bates and Granger (1969), is to weight each model by \(1/\sigma^2_i\), where \(\sigma^2_i\) is its forecast variance. If the models’ forecasts are unbiased and their errors uncorrelated, these weights are optimal in the sense that they produce predictors with the smallest possible RMSE. In practice the forecast variance is unknown and needs to be estimated. This leads to the choice

$$ {\hat \beta}_{{\rm BGA},i} = \frac{1/{\hat \sigma}^{2}_i}{\sum_{j=1}^k 1/{\hat \sigma}^{2}_j}, $$

where $\hat{\sigma}^2_i$ denotes the forecast variance of model $i$, which we estimated as the sample variance of the forecast error $e_{i,t} \equiv X_{i,t} - Y_t$ within the calibration period.
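In code, the BGA weights follow directly from the calibration errors (a minimal sketch in the same conventions as above):

```python
import numpy as np

def bga_weights(X_cal, y_cal):
    """Bates-Granger weights: each model weighted proportionally to the
    inverse of its forecast error variance, estimated on the calibration period."""
    errors = X_cal - y_cal[:, None]          # e_{i,t} = X_{i,t} - Y_t
    inv_var = 1.0 / np.var(errors, axis=0)   # 1 / sigma_i^2 (estimated)
    return inv_var / inv_var.sum()           # normalize so the weights sum to one
```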

2.3 Information criterion averaging

Buckland et al. (1997) and Burnham and Anderson (2002) proposed using weights of the form

$$ {\hat \beta}_{i} = \frac{\exp\left(-I_i/2 \right)}{\sum_{j=1}^k\exp\left(-I_j/2 \right)}, $$
(3)

where $I_i$ is an information criterion describing the fit of the model, of the form $I_i = -2 \log(L_i) + q(p_i)$, where $L_i$ is the (maximized) likelihood of model $i$, and $q(p_i)$ is a penalty increasing in the number of parameters, $p_i$, that need to be estimated for model $i$. The cases considered here are Akaike's information criterion (AIC), with penalty $q(p) = 2p$, and the Bayes information criterion (BIC), with penalty $q(p) = p \log(n)$, where $n$ is the calibration sample size. We refer to the model averaging scheme (3) based on AIC and BIC as AICA and BICA, respectively, and to their respective $\boldsymbol{\beta}$-values as $\hat{\boldsymbol{\beta}}_{\rm AICA}$ and $\hat{\boldsymbol{\beta}}_{\rm BICA}$. In the literature these methods are sometimes referred to as smooth AIC and smooth BIC, respectively. To evaluate the information criteria numerically, it is convenient to assume, as we do here, that the errors of the individual models are normally distributed. In that case the likelihood of model $i$ is related to its estimated forecast error variance $\hat{\sigma}^2_i$ via $-2 \log(L_i) = n \log \hat{\sigma}^2_i + n$.
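A sketch of the resulting AICA/BICA weights under this normality assumption follows (our own code; subtracting the minimum criterion value before exponentiating is a standard numerical-stability device that leaves the weights in (3) unchanged):

```python
import numpy as np

def ic_weights(X_cal, y_cal, n_params, criterion="AIC"):
    """Smooth-AIC / smooth-BIC weights of Eq. (3), under the normality
    assumption in the text, so that -2 log L_i = n log(sigma_i^2) + n."""
    n, k = X_cal.shape
    sigma2 = np.var(X_cal - y_cal[:, None], axis=0)   # per-model error variance
    neg2logL = n * np.log(sigma2) + n
    p = np.asarray(n_params, dtype=float)             # p_i, parameters of model i
    penalty = 2.0 * p if criterion == "AIC" else p * np.log(n)
    I = neg2logL + penalty                            # information criterion I_i
    w = np.exp(-(I - I.min()) / 2.0)                  # shift by min(I) for stability
    return w / w.sum()
```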

2.4 Granger-Ramanathan averaging

The weighting schemes described above do not exploit covariance structure that may be present in the forecast errors; their predictors are weighted averages of the individual forecasts, with weights determined by a measure of fit for each individual model. A natural way to exploit the presence of covariances is by using OLS estimators within the linear regression model.

This OLS approach to forecast combination was suggested by Granger and Ramanathan (1984). The OLS estimator of the parameter vector $\boldsymbol{\beta}$ of the linear regression model (1) is

$$ \hat{\boldsymbol{\beta}}_{\rm GRA} = \left(X^{\mathrm{T}} X\right)^{-1} X^{\mathrm{T}} Y, $$
(4)

where $X$ and $Y$ stand for the matrix of $X$-values and the vector of $Y$-values in the calibration set, respectively. Under the standard assumptions underlying the classical linear regression model, the OLS estimator can be shown to be the best linear unbiased estimator of $\boldsymbol{\beta}$. Even if some of these assumptions do not hold (e.g., if there are missing $X$-variables), the OLS estimates can still be shown to converge to the pseudo-true parameter $\boldsymbol{\beta}^*$, corresponding to the best linear model of $Y_t$ (in the RMSE sense) in terms of $X_{1,t}, \ldots, X_{k,t}$. Since GRA is based on these OLS weights, it should be expected to be a serious competitor for the other model averaging techniques considered here.
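In code, the GRA weights of (4) can be obtained with a least squares solver (a sketch; using a solver rather than the explicit matrix inverse is our own implementation choice and is numerically preferable, but mathematically equivalent):

```python
import numpy as np

def gra_weights(X_cal, y_cal):
    """Granger-Ramanathan weights: the OLS estimator of Eq. (4).
    np.linalg.lstsq solves the least squares problem without forming
    the explicit inverse of X'X, which is numerically more stable."""
    beta, *_ = np.linalg.lstsq(X_cal, y_cal, rcond=None)
    return beta
```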

2.5 Bayesian model averaging

Hoeting et al. (1999) provide an excellent overview of the various BMA techniques that have been proposed in the literature. Applications of BMA in hydrology and meteorology have been described by Neuman (2003), Ye et al. (2004), Raftery et al. (2005), Gneiting et al. (2005), Vrugt and Robinson (2007), and Vrugt et al. (2008). See Bishop and Shanley (2008) for a recent contribution towards improving BMA's performance when confronted with data coming from extreme weather conditions.

Depending on the type of application one has in mind, different flavors of BMA are more suitable. For instance, it makes a crucial difference whether one would like to combine point forecasts (in which case some forecasts may be assigned negative weights) or density forecasts (in which case allowing for negative weights could lead to the undesired effect of negative density forecasts). The two BMA methods considered in this paper, namely BMA in the finite mixture model and BMA in the linear regression model, correspond to averaging density forecasts and forming linear combinations of point forecasts, respectively.

2.5.1 BMA in the finite mixture model

In case one wishes to combine density forecasts $f_{i,t}(y)$, $i=1,\ldots,k$, for $Y_t$, it is common to consider the combined forecast density $g_t(y) = \sum_{i=1}^k \beta_i f_{i,t}(y)$, known as a finite mixture model. To ensure that $g_t(y)$ represents a density, the BMA weights $\beta_i$ are assumed to be non-negative and to add up to one; i.e., the weights are constrained to the simplex $\Delta^{k-1} = \{\boldsymbol{\beta} \mid \beta_i \geq 0, i=1,\ldots,k, \hbox{ and } \sum_{i=1}^k \beta_i = 1\}$. The BMA predictive density of $Y_t$ thus consists of a mixture of normals (see below), located at the individual point forecasts $X_{i,t}$, with weights $\beta_i$ and variances $\sigma^2_i$. The point predictors associated with this forecast are again given by (2), where $X_{i,t}$ coincides with the predictive mean $\int_{-\infty}^{\infty} x\, f_{i,t}(x)\, {\rm d}x$.

For this finite mixture model, Raftery et al. (2005) define the BMA weights to be those weights which maximize the likelihood based on the data available for estimation (the calibration data). Here, as is often done in the literature, the mixture densities are assumed to be centered around $X_{i,t}$, with normally distributed noise with fixed unknown variance $\sigma^2_i$, $i=1,\ldots,k$, i.e., $f_{i,t}(Y_t) = (2 \pi \sigma^2_i)^{-\frac{1}{2}} \exp\left(-(Y_t - X_{i,t})^2/(2 \sigma^2_i)\right)$.

We use the short-hand notation BMA$_{\rm mix}$ to indicate the BMA method in the context of the finite mixture model. The joint estimators of $\boldsymbol{\beta}$ and $\boldsymbol{\sigma} = (\sigma_1, \ldots, \sigma_k)^{\mathrm{T}}$ are given by

$$ \left(\hat{\boldsymbol{\beta}}_{{\rm BMA}_{\rm mix}}, \hat{\boldsymbol{\sigma}} \right) = \mathop{\arg\max}\limits_{\boldsymbol{\beta} \in \Delta^{k-1},\, \boldsymbol{\sigma} \in {\mathbb{R}}_+^k} \sum_{t=1}^n \log g_t(Y_t) = \mathop{\arg\max}\limits_{\boldsymbol{\beta} \in \Delta^{k-1},\, \boldsymbol{\sigma} \in {\mathbb{R}}_+^k} \sum_{t=1}^n \log \left(\sum_{i=1}^k \beta_i f_{i,t}(Y_t) \right), $$

where n denotes the calibration sample size.

The estimation of the model weights and model variances $\sigma^2_i$ is a moderately complex nonlinear optimization problem. One way to approach it is with the expectation-maximization algorithm, as suggested by Raftery et al. (2005). In principle, however, any reliable optimization routine can be used for the purpose of the present paper. We follow Vrugt et al. (2008) and use the DiffeRential Evolution Adaptive Metropolis (DREAM) adaptive Markov chain Monte Carlo (MCMC) algorithm, proposed in Vrugt et al. (2009), to obtain the model weights and the individual model variances. For a given calibration data set, the likelihood function was optimized numerically by sampling from the Bayesian posterior distribution with a flat prior (i.e., from a density in parameter space which is a normalized version of the likelihood function), and identifying the point in the MCMC sample for which the likelihood of the finite mixture model was maximized. We used the standard algorithmic settings of DREAM, sampling a total of $2 \times 10^5$ values of $\boldsymbol{\beta}$ for each case, since we found that the solutions obtained with this number of iterations were sufficiently accurate, as judged from comparing independent runs of the algorithm.
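For readers who wish to experiment without an MCMC sampler, the following sketch maximizes the same mixture likelihood with a generic optimizer. This is a simplification of the DREAM-based procedure used in the paper; the softmax reparameterization, which keeps the weights on the simplex and the variances positive, is our own implementation choice:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

def bma_mix_fit(X_cal, y_cal):
    """Maximum-likelihood fit of the normal finite-mixture BMA model.
    Sketch only: the paper uses the DREAM MCMC sampler; here a generic
    optimizer is used instead, with unconstrained parameters mapped to
    weights on the simplex (softmax) and positive variances (exp)."""
    n, k = X_cal.shape

    def neg_loglik(theta):
        beta = softmax(theta[:k])            # mixture weights on the simplex
        log_s2 = theta[k:]                   # log sigma_i^2, unconstrained
        # log of the normal component densities f_{i,t}(Y_t)
        z2 = (y_cal[:, None] - X_cal) ** 2 / np.exp(log_s2)
        log_f = -0.5 * (np.log(2.0 * np.pi) + log_s2 + z2)
        # log g_t(Y_t) = logsumexp_i [ log beta_i + log f_{i,t}(Y_t) ]
        return -np.sum(logsumexp(np.log(beta) + log_f, axis=1))

    theta0 = np.concatenate([np.zeros(k), np.full(k, np.log(np.var(y_cal)))])
    res = minimize(neg_loglik, theta0, method="Nelder-Mead",
                   options={"maxiter": 50000, "fatol": 1e-10, "xatol": 1e-10})
    return softmax(res.x[:k]), np.exp(res.x[k:])   # (weights, variances)
```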

2.5.2 BMA in the linear regression model

Bayesian model averaging in the context of the linear regression model (1) has been discussed by Raftery et al. (1997). This particular BMA approach does allow the model weights to become negative. The associated point predictor in the case of a flat prior reduces to the point predictor of the maximum likelihood estimator, which happens to be the OLS estimator (4). Although this theoretical argument shows that formally the BMA version of the linear model is redundant in our set of model averaging approaches, we have included it nevertheless, to provide an extra check on our numerical procedures. We refer to the BMA method in the context of the linear regression model by using the subscript 'lin' (for 'linear'), and an additional superscript 'Δ' in case the weights are restricted to the simplex $\Delta^{k-1}$. For instance, $\hbox{BMA}_{\rm lin}^{\Delta}$ refers to BMA in the linear regression model (1) with $\boldsymbol{\beta}$ on the simplex.

The BMA weights in the linear regression model are obtained by maximizing the Bayesian posterior density for a flat prior, that is

$$ \left(\hat{\boldsymbol{\beta}}_{{\rm BMA}^{s_{\mathcal{H}}}_{\rm lin}}, \hat{\sigma} \right) = \mathop{\arg\max}\limits_{\boldsymbol{\beta} \in {\mathcal{H}},\, \sigma \in {\mathbb{R}}_+} \sum_{t=1}^n \log h_t(Y_t), $$

where $h_t(Y_t)=(2 \pi \sigma^2)^{-\frac{1}{2}} \exp \left(-(Y_t - \sum_{i=1}^k \beta_i X_{i,t})^2/(2 \sigma^2) \right)$, and ${\mathcal{H}}$ is the set of $\boldsymbol{\beta}$-values under consideration, which in this paper is either ${\mathbb{R}}^k$ (empty superscript $s_{\mathcal{H}}$) or $\Delta^{k-1}$ ($s_{\mathcal{H}} = $ 'Δ'). The maximization was again performed using the DREAM algorithm.
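To make the equivalence with GRA explicit, note that under a flat prior the log posterior equals, up to an additive constant, the log likelihood (a short derivation of our own from the definitions above):

$$ \sum_{t=1}^n \log h_t(Y_t) = \mathrm{const} - n \log \sigma - \frac{1}{2\sigma^2} \sum_{t=1}^n \left(Y_t - \sum_{i=1}^k \beta_i X_{i,t}\right)^2, $$

so that, for any fixed $\sigma$, the maximizer over $\boldsymbol{\beta} \in {\mathbb{R}}^k$ minimizes the sum of squared residuals and is therefore exactly the OLS estimator (4).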

2.6 Mallows model averaging

On the other side of the spectrum, considerable effort has been made to come up with a frequentist solution to the problem of model averaging. Hjort and Claeskens (2003) and Claeskens and Hjort (2003) made notable contributions in this direction. The general tendency in this literature is that there is no unique best model, but that the best model is a subjective notion that depends on one's objective; for each objective, a different model may be optimal. Hansen (2007, 2008) has continued this line of research with his proof that model combination based on Mallows' criterion asymptotically leads to forecasts with the smallest possible mean squared error. For a more generally valid proof, which covers the context of the linear regression model (1) considered here, see Wan et al. (2010).

The MMA criterion is the penalized sum of squared residuals (across the calibration data)

$$ C_n(\boldsymbol{\beta}) = \sum_{t=1}^n \left(Y_{t} - \boldsymbol{\beta}^{\mathrm{T}}\mathbf{X}_t \right)^2 + 2 \sum_{j=1}^k \beta_j p_j S^2, $$
(5)

where, as before, $p_j$ is the number of parameters of model $j$ (i.e., a measure of the complexity of model $j$), and $S^2$ is an estimate of the variance $\sigma^2$ of $\varepsilon_t$ in (1), based on the most complex model considered. In this study $S^2$ was taken to be the smallest observed RMSE for any individual model, among the set of models.

The MMA weight vector is obtained by minimizing this criterion:

$$ \hat{\boldsymbol{\beta}}_{{\rm MMA}^{s_{\mathcal{H}}}} = \mathop{\arg\min}\limits_{\boldsymbol{\beta} \in {\mathcal{H}}} C_n(\boldsymbol{\beta}), $$

where ${\mathcal{H}}$ is the set of parameters under consideration (${\mathbb{R}}^k$ or $\Delta^{k-1}$). The value of $\hat{\boldsymbol{\beta}}_{\rm MMA}$, i.e., the value of $\boldsymbol{\beta}$ for which (5) is minimized, is again found using the DREAM algorithm. With the DREAM algorithm we generate a sample of size $2 \times 10^5$, as in the case of the BMA algorithm, from the probability density function $h(\boldsymbol{\beta})$ in $\boldsymbol{\beta}$-space, where

$$ h(\boldsymbol{\beta}) \propto {\rm e}^{-\frac{1}{2} C_n(\boldsymbol{\beta})}. $$

As in the BMA implementation, the maximum of this function is found by identifying the point $\boldsymbol{\beta}_i$ in the MCMC sample for which $h(\boldsymbol{\beta}_i)$ is largest.
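For the unrestricted case no sampler is strictly necessary: setting the gradient of (5) to zero yields a small shift of the GRA normal equations. The following sketch illustrates this (our own algebra and naming; the simplex-restricted variant MMA$^{\Delta}$ instead requires a constrained optimizer, such as the DREAM procedure described above):

```python
import numpy as np

def mma_weights(X_cal, y_cal, n_params, S2):
    """Unrestricted minimizer of the Mallows criterion (5).
    Setting the gradient to zero gives the shifted normal equations
        X'X beta = X'y - S^2 * p,
    where p is the vector of model complexities p_j. For S2 = 0
    this reduces to the GRA/OLS solution of Eq. (4)."""
    p = np.asarray(n_params, dtype=float)
    XtX = X_cal.T @ X_cal
    rhs = X_cal.T @ y_cal - S2 * p
    return np.linalg.solve(XtX, rhs)
```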

3 Consistency and asymptotic RMSE optimality

This section briefly summarizes some of the statistical properties of the model averaging methods described above. The proofs are omitted as they are standard results in statistics. The aim of this section is to anticipate the types of results to expect in the case studies presented in the next section, and to facilitate their discussion afterwards.

In Sect. 3.1 we consider asymptotics with a fixed number, $k$, of ensemble members, and an increasing calibration sample size $n$. Asymptotics where the number of competing models increases with the calibration sample size are discussed in Sect. 3.2.

3.1 Asymptotics with k finite and fixed

We assume the vector-valued time series $\{\mathbf{Z}_t \equiv (Y_t, \mathbf{X}_t^{\mathrm{T}})^{\mathrm{T}}\}$, $t \in {\mathbb{Z}}$, to be strictly stationary, with finite variances. The coefficient vector of the best linear predictor of $Y_t$ is then defined as

$$ \boldsymbol{\beta}^* = \mathop{\arg\min}\limits_{\boldsymbol{\beta} \in {\mathbb{R}}^k} {\mathbb{E}}\left[ (Y_t - \mathbf{X}_t^{\mathrm{T}} \boldsymbol{\beta})^2 \right], $$

where \({\mathbb{E}}\) denotes the mathematical expectation operator.

An estimator $\hat{\boldsymbol{\beta}}_n$ of $\boldsymbol{\beta}^*$ is consistent if it converges to $\boldsymbol{\beta}^*$ in probability as the calibration sample size $n$ increases: ${\mathbb{P}}(\|\boldsymbol{\beta}^* - \hat{\boldsymbol{\beta}}_n\| < \varepsilon) \mathop{\longrightarrow}\limits_{n \to \infty} 1$ for any $\varepsilon > 0$, where $\|\cdot\|$ denotes the Euclidean norm (or an equivalent norm) in ${\mathbb{R}}^k$. A model averaging method based on a consistent estimator of $\boldsymbol{\beta}^*$ is asymptotically RMSE optimal, in that the expected out-of-sample squared prediction error converges to the lowest possible value, $\sigma^2$, the variance of the part of $Y_t$ that cannot be explained by any linear combination of the $X$-variables.
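The reason is the following standard decomposition, added here as a reminder; it treats $\hat{\boldsymbol{\beta}}$ as estimated from a calibration sample independent of the evaluation observation, and uses the orthogonality of the best linear prediction error to $\mathbf{X}_t$:

$$ {\mathbb{E}}\left[ (Y_t - \mathbf{X}_t^{\mathrm{T}} \hat{\boldsymbol{\beta}})^2 \right] = \sigma^2 + (\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}^*)^{\mathrm{T}}\, {\mathbb{E}}\left[ \mathbf{X}_t \mathbf{X}_t^{\mathrm{T}} \right] (\hat{\boldsymbol{\beta}} - \boldsymbol{\beta}^*), $$

in which the second term vanishes as $\hat{\boldsymbol{\beta}}_n \to \boldsymbol{\beta}^*$ in probability.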

Three of the model averaging methods described above are asymptotically RMSE optimal: GRA, BMA$_{\rm lin}$, and MMA. GRA employs a consistent (OLS) estimator and hence is asymptotically RMSE optimal. Because OLS estimators coincide with maximum likelihood estimators in linear models with normal errors, BMA$_{\rm lin}$ with a flat prior is formally equivalent to GRA; this means that, apart from numerical inaccuracy associated with the optimization routine used, BMA$_{\rm lin}$ as we implemented it is equivalent to GRA. The consistency of Bayesian point estimators guarantees that BMA$_{\rm lin}$ with a prior distribution having $\boldsymbol{\beta}^*$ in its support is asymptotically RMSE optimal. Finally, the objective functions of MMA and GRA differ only by a model dependent finite penalty, independent of sample size. Since this penalty plays no role asymptotically, MMA is asymptotically equivalent to GRA, and therefore also asymptotically RMSE optimal.

Equal weights averaging is only asymptotically RMSE optimal in the very special case that the optimal weight vector happens to be $(\frac{1}{k}, \ldots, \frac{1}{k})$. Likewise, methods that are confined to the simplex $\Delta^{k-1}$ are by definition not asymptotically RMSE optimal if $\boldsymbol{\beta}^*$ lies outside the simplex. $\hbox{BMA}_{\rm lin}^{\Delta}$ and MMA$^{\Delta}$ inherit the asymptotic RMSE optimality from BMA$_{\rm lin}$ and MMA, respectively, if $\boldsymbol{\beta}^* \in \Delta^{k-1}$.

BMA$_{\rm mix}$, AICA, and BICA are not asymptotically RMSE optimal, even if $\boldsymbol{\beta}^* \in \Delta^{k-1}$. Because the likelihood functions of the finite mixture model and of the linear regression model are different, the weights underlying BMA$_{\rm mix}$ converge in probability to a value which is generally different from $\boldsymbol{\beta}^*$. AICA and BICA are not asymptotically RMSE optimal because the penalties $q(p_i)$ are fixed while $\log(L_i)$ grows, on average, linearly with the sample size. Assuming that no two ensemble members have exactly the same average likelihood, the weight vector converges (in probability) to the vertex of the simplex that puts all weight on the single model in the ensemble with the highest average log likelihood per sample point (the best performing individual model).

3.2 Asymptotics with k depending on n

The result of Sect. 3.1 that GRA and BMA$_{\rm lin}$ are asymptotically RMSE optimal depends crucially on the assumption that $k$ is fixed as the sample size increases. Hansen (2007, 2008) introduced MMA in a context where the number of competing models, $k = k(n)$, is allowed to grow with the sample size, and showed that MMA converges to the optimal model asymptotically. Under these asymptotics, GRA and BMA$_{\rm lin}$ would typically fail to converge to the optimal model, due to the lack of a penalty for model complexity.

For the case studies one can envisage three possible scenarios a priori: either the fixed-$k$ asymptotic theory applies, or Hansen's asymptotics, or neither. If the number of ensemble members $k$ is sufficiently small, and the calibration sample size large, one should expect fixed-$k$ asymptotics to apply. If $k$ is too large compared to the sample size, Hansen's asymptotics apply. If the calibration sample is too small, neither asymptotic regime will work. Unfortunately, theory alone does not dictate which asymptotic regime applies in practice; this also depends on the type of data at hand, and on the sample sizes used for determining the weights of the individual models.

Note that for large calibration samples, fixed-$k$ asymptotic theory predicts that MMA, GRA, and BMA$_{\rm lin}$ perform comparably well, while Hansen's asymptotic theory predicts that MMA is superior. The main purpose of the case studies presented in the next section is to find out, by analysis of the out-of-sample prediction error, whether fixed-$k$ asymptotics, Hansen's asymptotics, or neither, apply in practice.

4 Case studies

In this section we compare the different model averaging strategies in terms of their ability to reduce the forecast error for two hydrologic systems. The first case study considers daily streamflow forecasting of the Leaf River watershed in Mississippi using an 8-member ensemble of conceptual watershed models. This study deals with a variable that is highly skewed and perhaps not well described by a normal distribution. In the second case study we use seven different soil hydraulic models for forecasting tensiometric pressure heads in a layered vadose zone in New Zealand.

4.1 Streamflow data

In this study, we apply the different model averaging approaches to probabilistic ensemble streamflow forecasting using historical data from the Leaf River watershed (1950 km²) located north of Collins, Mississippi. In a previous study, Vrugt and Robinson (2007) generated a 36-year ensemble of daily streamflow forecasts using eight different conceptual watershed models: ABC (3) (Fiering 1967; Kuczera and Parent 1998), GR4J (4) (Perrin et al. 2003), HYMOD (5) (Boyle et al. 2001; Vrugt et al. 2002, 2005), TOPMO (8) (Oudin et al. 2005), AWBM (8) (Boughton 1993; Marshall et al. 2005), NAM (9) (Nielsen and Hansen 1973), HBV (9) (Bergström 1995), and SAC-SMA (13) (Burnash et al. 1973). These eight models are listed in order of increasing complexity, with the number of user-specified parameters indicated in parentheses. Inputs to the models include mean areal precipitation (MAP) and potential evapotranspiration (PET); the output is estimated channel streamflow. In this study, the first 3000 data points on record (roughly corresponding to the first 8 years of data, WY 1953–1960) are used for calibration of the parameters of the various hydrologic models.

The $Y$-variable, $Y_t$, is taken to be the observed river discharge, while the vector of $X$-variables comprises an ensemble of one-day-ahead predictors of $Y_t$ constructed with the eight different models, with respective numbers of parameters 3, 4, 5, 8, 8, 9, 9, and 13. Previous studies have indicated that a calibration data set of approximately 8–11 years, representing a range of hydrologic phenomena (wet, medium, and dry years), is desirable to achieve deterministic model calibrations that are consistent and generate good verification and forecasting performance; see, e.g., Yapo et al. (1996) and Vrugt et al. (2006b). In this study we systematically vary the length of the calibration data set by using an increasing number of observations before the end of the full calibration data set of 3000 observations. The remaining 28 years on record (WY 1961–1988, 10,500 observations) are subsequently used to evaluate the forecast performance.

Table 2 shows the estimates of the weights, as well as the RMSE measured across the evaluation period for each of the forecast methods, based on the full calibration data set (3000 data points). Using OLS on the evaluation period, it is possible to find the weights of the linear regression model that are, ex post, optimal for the specific evaluation data at hand. The corresponding model weights for this best model are given in the row indicated by '$\boldsymbol{\beta}_{\rm opt}$'. For comparison, the performance of the forecasts of the individual ensemble members is shown in the last eight rows.

Table 2 Streamflow data results

Comparing the RMSE values displayed in the penultimate column of Table 2 shows that not all model averaging methods produce forecasts that are more accurate than the best performing individual model of the ensemble. Specifically, the predictive performance of BMA$_{\rm mix}$, MMA$^{\Delta}$, BGA, and EWA is worse than that of the best ensemble member, SAC-SMA, indicated in the table by $X_8$. AICA and BICA, on the contrary, put all their weight on the best ensemble member, and therefore exhibit predictive capabilities similar to this best member. $\hbox{BMA}_{\rm lin}^{\Delta}$, with weights restricted to the simplex, beats the best individual ensemble member, but performs worse than its unrestricted counterpart BMA$_{\rm lin}$. Unrestricted BMA$_{\rm lin}$, MMA, and GRA each perform very well, with practically identical values of the weights. Note that these three best performing methods all have unrestricted weights, the estimates of which have the same signs as the optimal weights given in row '$\boldsymbol{\beta}_{\rm opt}$'. The last column, denoted 'RMSE*', shows the RMSE values obtained without performing the bias correction step prior to model averaging. The RMSE values are slightly larger without bias correction, but the pattern is very similar.

To assess the effect of the length of the calibration data set, the calibration period was progressively extended to include more past data, while keeping the evaluation period fixed. To obtain a convenient scale for plotting the results, we focus on the difference between the out-of-sample RMSE and the minimal RMSE (achieved for $\boldsymbol{\beta}_{\rm opt}$), denoted by ΔRMSE. We refer to ΔRMSE as the 'excess prediction error'.
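In terms of the helper functions sketched in Sect. 2 (our own illustration, with the same hypothetical array names), $\boldsymbol{\beta}_{\rm opt}$ and the excess prediction error of a method with estimated weights are simply:

```python
# Ex-post optimal weights: OLS fitted on the *evaluation* data themselves
beta_opt = gra_weights(X_eval, y_eval)

# Excess prediction error of a method whose calibrated weights are beta_hat
delta_rmse = rmse(X_eval, y_eval, beta_hat) - rmse(X_eval, y_eval, beta_opt)
```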

Figure 1 displays ΔRMSE for increasing lengths of the calibration data set. The graph clearly indicates that EWA and BGA perform poorly relative to the other methods, regardless of the length of the calibration period. The performance of AICA and BICA in this case study is equal for all calibration sample sizes, a result of the fact that these methods essentially selected the same weight vector throughout, putting all weight on the model that performed best in the calibration period. For smaller calibration sample sizes it is difficult to single out a method that is consistently superior to the other averaging strategies. However, for larger calibration data sets (n = 3,000) BMA$_{\rm lin}$, MMA, and GRA perform best, exactly as predicted by fixed-$k$ asymptotic theory.

Fig. 1 Streamflow data. Excess prediction error ΔRMSE for increasing calibration sample size (logarithmic scales)

4.2 Soil water pressure head data

The second case study concerns tensiometric pressure head data in a layered vadose zone of volcanic origin. These field data, from the Spydia experimental site in the northern Lake Taupo catchment, New Zealand, consist of water pressure measurements taken at various depths, and have been described in detail by Wöhling et al. (2008). The data set originally contained tensiometric pressure measurements at five different depths. In this case study we focus on the pressure data at the smallest depth (0.4 m), for which the data are the most dynamic.

Ensemble forecast combinations for these data have been studied by Wöhling and Vrugt (2008). They considered hourly ensemble forecasts based on seven different soil hydraulic models, listed here with their respective numbers of parameters in parentheses: modified Mualem-van Genuchten (MVG), non-hysteretic (15); MVG, hysteretic (24); Brooks and Corey (21); Kosugi (24); Durner (15); Šimůnek et al. (15); and a non-hysteretic MVG model with four soil horizons (20). This last model is an extension of the first MVG soil hydraulic model used in this study, but uses four instead of three soil horizons to further improve the vertical description of moisture flow and uptake in the unsaturated zone; it contains five additional parameters that define the hydraulic functions for the fourth horizon. These seven soil hydraulic models encompass not only different formulations of the same physical relationships but also different conceptual models.

A detailed description of each of these seven models is given in Wöhling and Vrugt (2008) and will not be repeated here. The same evaluation period as in Wöhling and Vrugt (2008) is used (the last 2,301 observations). The length of the calibration data set is varied by taking subsets of various lengths from the end of the first 6,769 observations, which constituted the calibration data set of Wöhling and Vrugt (2008).

Table 3 displays the estimated weights, as well as the RMSE measured across the evaluation period for each of the forecast methods, based on the full calibration set (6,769 observations). The layout is similar to that of Table 2, with the main difference that there are now seven ensemble members instead of eight.

Table 3 Tensiometric pressure head data, depth 0.4 m

Inspection of the RMSE values shows that EWA and BGA again perform worse than the best performing individual model (MVG, hysteretic, denoted by $X_2$ in Table 3). However, unlike for the streamflow data, BMA$_{\rm mix}$ and MMA$^{\Delta}$ now perform slightly better than this simple benchmark. AICA and BICA are borderline cases, effectively putting all weight on the best performing individual model. Within the groups BMA$_{\rm lin}$ and MMA, the versions with unrestricted weights again perform better than the restricted versions. Unrestricted BMA$_{\rm lin}$, MMA, and GRA all perform best, with practically identical weight vectors. An important difference with the previous case study is that these three best performing methods now do not have weights with the same signs as the optimal weights (row '$\boldsymbol{\beta}_{\rm opt}$'). This indicates that the estimated weights are not optimal, which suggests a structural difference between the calibration and evaluation periods. This can easily happen, even for stationary time series data, if the calibration sample is too small, and is not uncommon in hydrological applications. Note, however, that the excess prediction errors of BMA$_{\rm lin}$, MMA, and GRA are smaller than that of the best performing single model, so model averaging is still able to reduce forecast errors in such a case. Judging from the RMSE* values reported in the last column, similar results would have been obtained without bias correction.

The effect of the length of the calibration data set was again assessed by progressively increasing the length of the calibration data set, while keeping the evaluation period fixed. Figure 2 displays the excess prediction error (ΔRMSE) for increasing lengths of the calibration data set. The results are in line with the previous case study. EWA and BGA perform poorly, particularly for the larger calibration periods. Except for the smaller calibration sample sizes, the performances of AICA and BICA are equal (beyond n = 2,000). Again, for the smaller calibration sample sizes no obvious winning method can be distinguished, while for the largest calibration sample size considered (n = 6,769) BMA$_{\rm lin}$, MMA, and GRA perform best, and almost identically, as predicted by fixed-$k$ asymptotic theory.

Fig. 2 Tensiometric pressure head data. Excess prediction error ΔRMSE for increasing calibration sample size (logarithmic scales)

5 Discussion

Our case studies point to a couple of important conclusions for model averaging. Perhaps the most striking observation is that the methods that restrict the model weights to be non-negative performed worst; none of the three best methods had this restriction. We conclude that not only are there theoretical arguments for allowing negative weights in general, but the empirical results also suggest that in surface and subsurface hydrologic case studies the best forecasts may have weights outside the simplex.

Another main conclusion is that, for the particular data sets considered, a relatively simple model averaging method, GRA, based on OLS regression, performs just as well as very sophisticated competing model averaging methods, such as BMA and MMA. The gain from using GRA rather than BMA or MMA can be significant: an analytical solution exists for the GRA weights, whereas computationally more demanding, iterative procedures, such as expectation-maximization or MCMC sampling with DREAM, are required to find the optimal weights for BMA and MMA.

The result that these methods perform well here arguably depends on choosing the RMSE as the measure of performance. As indicated in the introduction, the RMSE is a widely used measure for evaluating the accuracy of point forecasts. Moreover, the various model averaging strategies used herein were developed within the context of minimizing the squared prediction error; the only exception is BMA$_{\rm mix}$, whose weights are calibrated by posing the calibration problem in a maximum likelihood context. One might use different evaluation criteria, such as the mean absolute error, yet a fair comparison of the different model averaging methods would then require the explicit use of these criteria during calibration. In that case we would be introducing new model averaging methods altogether, which is beyond the scope of this paper. Another possibility might be to use various calibration criteria simultaneously and interpret the Pareto solution set using multi-criteria optimization methods. Such an approach, within the context of BMA$_{\rm mix}$, has previously been presented by Vrugt et al. (2006a).

For increasing calibration sample size, the three asymptotically RMSE optimal model averaging methods, BMA$_{\rm lin}$, MMA, and GRA, quickly dominated the other methods, while converging towards the same common small forecast error, just as asymptotic theory with fixed finite $k$ predicts. That this would be the case could not be known a priori, because the finite sample size $n$ for which asymptotics work well depends on the type of data under consideration. We can therefore conclude that the calibration sample sizes typically used in hydrology are large enough, relative to $k$, to justify the use of predictions from asymptotic theory with a fixed number of ensemble members, $k$, and large sample size, $n$.

By using the ex post optimal model parameter $\boldsymbol{\beta}_{\rm opt}$ for the evaluation sample, it could be observed that the errors were not evenly distributed across the data. For the streamflow data, for instance, larger flows had larger prediction errors. Although we have not performed a quantitative study, since we have focused on batch optimization in the case studies, graphical inspection of the data and the predictors suggested that the best performing GRA method was specifically outperforming the other methods during the high-flow periods. Similar observations could be made for the pressure head data.

A final conclusion that can be drawn from our results is that BMA$_{\rm mix}$, which is increasingly being used in hydrology for obtaining density forecasts while taking into account model uncertainty, has point predictors that are not asymptotically optimal. Unfortunately, the asymptotically optimal GRA weights do not easily allow for the construction of a density forecast, because the weights are allowed to be negative. It therefore seems desirable to develop density forecasts that represent model uncertainty accurately, while at the same time having associated point predictors that achieve a close to minimal prediction error. We leave the development of such a model averaging method for future research.