Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It

We empirically show that Bayesian inference can be inconsistent under misspecification in simple linear regression problems, both in a model averaging/selection and in a Bayesian ridge regression setting. We use the standard linear model, which assumes homoskedasticity, whereas the data are heteroskedastic, and observe that the posterior puts its mass on ever higher-dimensional models as the sample size increases. To remedy the problem, we equip the likelihood in Bayes' theorem with an exponent called the learning rate, and we propose the Safe Bayesian method to learn the learning rate from the data. SafeBayes tends to select small learning rates as soon as the standard posterior is not 'cumulatively concentrated', and its results on our data are quite encouraging.


Introduction
The Problem We empirically demonstrate the inconsistency of Bayes factor model selection, model averaging and Bayesian ridge regression under model misspecification on a simple linear regression problem with random design. We sample data (X_1, Y_1), (X_2, Y_2), . . . i.i.d. from a distribution P*, where X_i = (X_{i1}, . . . , X_{ip_max}) are high-dimensional vectors, and we allow p_max = ∞. We use nested models M_0, M_1, . . ., where M_p is a standard linear model, consisting of conditional distributions P(· | β, σ²) expressing that Y is a linear function of p ≤ p_max covariates plus additive independent Gaussian noise ε_i ∼ N(0, σ²). We equip each of these models with standard priors on the coefficients and the variance, and also put a discrete prior on the models themselves. M := ⋃_{p=0}^{p_max} M_p does not contain the conditional 'ground truth' P*(Y | X) (hence the model is 'misspecified'), but it does contain a P̃ that is 'best' in several respects: it is closest to P* in KL (Kullback-Leibler) divergence, it represents the true regression function (leading to the best squared error loss predictions among all P ∈ M) and it has the true marginal variance (explained in Section 2.3). Yet, while P̃ ∈ M_0 and M_0 receives substantial prior mass, as n increases, the posterior puts most of its mass on complex M_p's with higher and higher p, and, conditional on these M_p's, on distributions which are very far from P* both in terms of KL divergence and in terms of L_2 risk, leading to bad predictive behavior in terms of squared error. Figures 1 and 2 illustrate a particular instantiation of our results, obtained when the X_{ij} are polynomial basis functions, i.e. X_{ij} = S_i^j with the S_i i.i.d. uniform on [−1, 1]. We also show comparably bad predictive behavior for various versions of Bayesian ridge regression, involving just a single, high- but finite-dimensional model.
In that case Bayes eventually recovers and concentrates on P̃, but only at a sample size that is incomparably larger than what can be expected if the model is correct.
These findings contradict the folk wisdom that, if the model is incorrect, then "Bayes tends to concentrate on neighborhoods of the distribution(s) in M that is/are closest to P* in KL divergence." Indeed, the strongest actual theorems to this end that we know of (Kleijn and van der Vaart, 2006; De Blasi and Walker, 2013) hold, as the authors emphasize, under regularity conditions that are substantially stronger than those needed for consistency when the model is correct (as given by e.g. Ghosal et al. (2000) or Zhang (2006a)), and our example shows that consistency may fail to hold even in relatively simple problems.
The Solution: Generalized Posterior and Safe Bayes Bayesian updating can be enhanced with a learning rate η, an idea put forward independently by several authors (Vovk, 1990; McAllester, 2003; Barron and Cover, 1991; Walker and Hjort, 2002; Zhang, 2006a) and suggested as a tool for dealing with misspecification by Grünwald (2011, 2012). η trades off the relative weight of the prior and the likelihood in determining the η-generalized posterior, where η = 1 corresponds to standard Bayes and η = 0 means that the posterior always remains equal to the prior. When choosing the 'right' η, which in our case is significantly smaller than 1 but of course not 0, η-generalized Bayes becomes competitive again. In general, the optimal η depends on the underlying ground truth P*, and the problem has always been how to determine the optimal η empirically, from the data.

Figure 1: Models M_0, . . . , M_50 with polynomial basis functions, given 100 data points sampled i.i.d. ∼ P* (about 50 of which are at (0, 0)). Standard Bayes overfits, not as dramatically as maximum likelihood/unpenalized least squares, but still enough to show dismal predictive behavior as in Figure 2. In contrast, Safe Bayes (which chooses learning rate η ≈ 0.4 here) and standard Bayes trained only at the points for which the model is correct (not (0, 0)) both perform very well.

Figure 2: The expected squared error risk obtained when predicting by the full Bayesian posterior (brown curve), the Safe Bayesian posterior (red curve) and the optimal predictions (gray curve), as a function of sample size, for the setting of Figure 1. SafeBayes is the R-log-version of SafeBayes defined in Section 4.2. Precise definitions and further explanation in Sections 5.1 and 5.2.
Recently, Grünwald (2012) proposed the Safe Bayesian algorithm for learning η, and showed theoretically that it achieves good convergence rates in terms of KL divergence on a variety of problems. Here we show empirically that Safe Bayes performs excellently in our regression setting, being competitive with standard Bayes if the model is correct and very significantly outperforming not just standard Bayes, but also cross-validation and approaches such as AIC when the model is incorrect. We do this by providing a wide range of experiments, varying parameters of the problem such as the priors and the true regression function, and studying various performance indicators such as the squared error risk and the posterior on the variance.
We note that a Bayesian's (and our) first instinct would be to learn η itself in a Bayesian manner instead. Yet this does not solve the problem, as we show in Section 5.4, where we consider a setting in which 1/η turns out to be exactly equivalent to the λ regularization parameter in the Bayesian Lasso and ridge regression approaches. We find that selecting η by (empirical) Bayes, as suggested by e.g. Park and Casella (2008), does not regularize nearly enough in our misspecification experiments. In the Bayesian ridge regression setting with fixed variance, the Safe Bayesian algorithm becomes very similar to learning λ by cross-validation with squared error loss, as is standard in frequentist ridge regression (cross-validation with a logarithmic score does not work, however). In the varying-variance case, there is no such straightforward interpretation of SafeBayes.
The Type of Misspecification The models are misspecified in that they make the standard assumption of homoskedasticity (σ² is independent of X) whereas in reality, under P*, there is heteroskedasticity: there is a region of X with low and a region with (relatively) high variance. Specifically, in our simplest experiment the 'true' P* is defined as follows: at each i, toss a fair coin. If the coin lands heads, then sample X_i from a uniform distribution on [−1, 1], and set Y_i = 0 + ε_i, where ε_i ∼ N(0, σ²_0). If the coin lands tails, then set (X_i, Y_i) = (0, 0), so that there is no variance at all. The 'best' conditional density P̃, closest to P*(Y | X) in KL divergence, representing the true regression function Y = 0 and reliable in the sense of Section 2.3, is then given by (1) with all β's set to 0 and σ̃² = σ²_0/2. In a typical sample of length n, we will thus have approximately n/2 points with X_i uniform and Y_i normal with mean 0, and approximately n/2 points with (X_i, Y_i) = (0, 0). These points seem 'easy' since they lie exactly on the regression function one would hope to learn; but they really wreak severe havoc.
The In-Liers Cause the Problem While it is well known that in the presence of outliers, Gaussian assumptions on the noise lead to problems, both for frequentist and Bayesian procedures, in the present problem we have in-liers rather than outliers. Also, if we slightly modify the setup so that homoskedasticity holds, standard Bayes starts behaving excellently, as again depicted in Figures 1 and 2. Finally, while the figures show what happens for polynomials, we used independent multivariate X's rather than nonlinear basis functions in the main experiments below, getting essentially the same results. All this indicates that the inconsistency is really caused by misspecification, in particular the presence of in-liers, and not by anything else. The setup is inspired by the work of Grünwald and Langford (2004, 2007), who gave a mathematical proof that Bayesian inference can be inconsistent under misspecification in a related but much more artificial classification setting. Here we show that this can also happen in a much more natural regression setting. The setting being more natural, it is also harder to analyze, and we only demonstrate the inconsistency empirically.

Overview of this Paper
KL-Associated Inference Tasks Section 2 introduces our setting and the main concepts needed to understand our results. A crucial point here is that, if Bayesian (or other likelihood-based) methods converge at all to a distribution in the model M, this distribution (often called the 'pseudo-truth') is the P̃ ∈ M that minimizes KL divergence to the true distribution P*. While the minimum KL divergence point is often not of intrinsic interest, for some (not all) models, P̃ can be of interest for other reasons as well (Royall and Tsou, 2003): there may be associated inference tasks for which P̃ is suitable as well. For standard linear models with fixed σ², the main associated task is squared error prediction: the KL-optimal P̃ is also optimal, among all P ∈ M, in terms of squared error prediction risk. If additionally σ² becomes a free parameter, then it is also reliable, which roughly means that it is optimal in determining its own squared error prediction quality (Section 2.3; we have a lot more to say about associated inference tasks in Section 7). Thus, whenever one is prepared to work with linear models and one is interested in squared risk or reliability, Bayesian inference would seem the way to go, even if one suspects misspecification... at least if there is consistency.
The Safe Bayesian Algorithm Section 3 introduces the η-generalized posterior and instantiates it to the linear model. Section 4 introduces the 'Safe Bayesian' algorithm, which learns η from the data. This is done via Dawid's (1984) prequential view on Bayesian inference. We then provide four instantiations of the SafeBayes method to linear models.
Section 5 discusses our experiments. We first provide the necessary preparation in Sections 5.1 and 5.2. Section 5.3 gives the results of our first experiment, a comparison of Bayesian and SafeBayesian model averaging and selection in two settings, one with a correct model and one with a model corrupted by 50% easy points as above, but with independent Gaussian rather than polynomial inputs. Section 5.4 repeats these experiments for a Bayesian ridge regression setting, and Section 5.5 provides an 'executive summary'. In all experiments Safe Bayesian methods behave much better in terms of squared error risk and reliability than standard Bayes if the model is incorrect, and hardly worse (sometimes still better) than standard Bayes if the model is correct.

Good vs. Bad Misspecification: Nonconcentration and Hypercompression
In and of itself, the fact that one obtains inconsistency with homoskedastic models and heteroskedastic data may not be very surprising; and indeed, whether similar phenomena occur in real-world data needs further study. The main strength of our example is rather that it clearly shows what can happen in principle, and indicates how one may go about solving it. We explain this in Section 6, in particular on the basis of Figure 9 on page 34, the essential picture for understanding the phenomenon. Inconsistency can only arise under a 'bad' form of misspecification, depicted in the figure. Under bad misspecification, the posterior may fail to concentrate, and this causes trouble. As a theoretical contribution of this paper, we show in this section that, under some conditions, a Bayesian strongly believes that her posterior will, in some sense, concentrate fast. Indeed, SafeBayes will only select η ≪ 1 if the standard posterior is nonconcentrated, and may thus be (loosely) viewed as a particular 'prior predictive check'. Posterior nonconcentration in turn can lead to 'hypercompression', the phenomenon that the Bayes predictive distribution behaves substantially better under a logarithmic scoring rule than the best distribution P̃ ∈ M; this can happen because the Bayes predictive distribution (a mixture of elements of M) behaves substantially differently from any of the elements of M. Somewhat paradoxically (Section 6.3), Bayes' overly good log-loss behavior is exactly what causes it to perform badly on the associated inference tasks (squared error prediction and reliability, in our case). Thus, there can be an inherent tension between behavior under log-loss and behavior under its associated tasks, a discrepancy which one can measure by the mixability gap (Section 6.4), a theoretical concept introduced by van Erven et al. (2011) and Grünwald (2012).
If one is interested in log-loss, standard Bayes is just fine; the Safe Bayesian algorithm should be used if one wants to optimize behavior against the associated tasks. Of course, whether such a task-dependent modification of Bayes is desirable needs discussion, which we provide in Section 7.
Additional Experiments The paper is followed by a long list of appendices where we provide a battery of experiments to check the robustness of our results. Specifically, we investigate what happens if we vary our models and priors (using e.g. a fixed σ² and standard priors used in the regression literature) and our methods, and if we vary the data generating distribution, using e.g. 'easy' points that are close to, but not exactly at, (0, 0). Our main conclusion here is that, of the four versions of SafeBayes which we propose, one is uncompetitive and among the other three, there is no clear winner, although they consistently outperform Bayes under misspecification. Furthermore we show that AIC, BIC and cross-validation also have serious problems in our regression setup. We also provide a proof for the theorem about nonconcentration given in Section 6.4.


Preliminaries: Setting, Optimal KL Distribution, Regression Function

Setting, Logarithmic Risk, Optimal Distribution

In this paper we consider data Z^n = Z_1, Z_2, . . . , Z_n ∼ i.i.d. P*, where each Z_i = (X_i, Y_i) is an independently sampled copy of Z = (X, Y), X taking values in some set X, Y taking values in Y, and Z = X × Y. We are given a model M = {P_θ | θ ∈ Θ}, parameterized by (possibly infinite-dimensional) Θ and consisting of conditional distributions P_θ(Y | X), extended to n outcomes by independence. For simplicity we assume that all P_θ have corresponding conditional densities f_θ, and similarly, the conditional distribution P*(Y | X) has a conditional density f*, all with respect to the same underlying measure. While we do not assume P*(Y | X) to be in (or even 'close' to) M, we want to learn, from given data Z^n, a 'best' (in a sense to be defined below) element of M, or at least a distribution on elements of M that can be used to make predictions about future data. While our experiments focus on linear regression, the discussion in this section holds for general conditional density models.
The logarithmic score, henceforth abbreviated to log-loss, is defined in the standard manner: the loss incurred when predicting Y based on density f(· | x), when Y takes on value y, is given by −log f(y | x). A central quantity in our setup is then the expected log-loss or log-risk, defined as

risk_log(θ) := E_{(X,Y)∼P*}[−log f_θ(Y | X)],

where here, as in the remainder of this paper, log denotes natural logarithm. We let P*_X be the marginal distribution of X under P*. The Kullback-Leibler (KL) divergence D(P* ‖ P_θ) between P* and conditional distribution P_θ is defined as the expectation, under X ∼ P*_X, of the KL divergence D(P*(· | X) ‖ P_θ(· | X)) between P_θ and the 'true' conditional P*(Y | X): D(P* ‖ P_θ) = E_{X∼P*_X}[D(P*(· | X) ‖ P_θ(· | X))]. A simple calculation shows that for any θ, θ′,

risk_log(θ) − risk_log(θ′) = D(P* ‖ P_θ) − D(P* ‖ P_θ′),

so that the closer P_θ is to P* in terms of KL divergence, the smaller its log-risk, and the better it is, on average, when used for predicting under the log-loss. Now suppose that M contains a unique distribution that is closest, among all P ∈ M, to P* in terms of KL divergence. We denote such a distribution, if it exists, by P̃. Then P̃ = P_θ for at least one θ ∈ Θ; we pick any such θ and denote it by θ̃, i.e. P̃ = P_θ̃, and note that it also minimizes the log-risk:

θ̃ ∈ arg min_{θ∈Θ} risk_log(θ).

We shall call such a θ̃ optimal.
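The identity between log-risk differences and KL divergence differences is easy to check numerically. The sketch below uses a toy instance of our own choosing (not the paper's regression setup): the truth is P*: Y ∼ N(0, 1) and the model is P_θ: Y ∼ N(θ, 1), so that risk_log(θ) − risk_log(0) = D(P* ‖ P_θ) − D(P* ‖ P_0) = θ²/2:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, size=1_000_000)   # samples from P*: N(0, 1)

def log_risk(theta, y):
    """Empirical log-risk: the average of -log N(y; theta, 1)."""
    return np.mean(0.5 * np.log(2 * np.pi) + 0.5 * (y - theta) ** 2)

theta = 1.3
# risk_log(theta) - risk_log(0) should be close to the KL difference
# D(P*||P_theta) - D(P*||P_0) = theta^2 / 2 = 0.845 here.
print(log_risk(theta, y) - log_risk(0.0, y))
```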
Since, in regions of about equal prior density, the log Bayesian posterior density is proportional to the log likelihood ratio, we hope that, given enough data, with high P*-probability, the posterior puts most mass on distributions that are close to P_θ̃ in KL divergence, i.e. that have log-risk close to optimal. Indeed, all existing consistency theorems for Bayesian inference under misspecification express concentration of the posterior around P_θ̃.

A Special Case: The Linear Model
Note that this is as in (1), but from now on we adopt the standard convention of taking X_{i0} ≡ 1 as a dummy random variable. We denote by M_p = {P_{p,β,σ²} | (p, β, σ²) ∈ Θ_p} the standard linear model with parameter space Θ_p := {(p, β, σ²) | β = (β_0, . . . , β_p)^T ∈ R^{p+1}, σ² > 0}, where the entry p in (p, β, σ²) is redundant but included for notational convenience. We let Θ = ⋃_{p=0}^{p_max} Θ_p. M_p states that for all i, (1) holds, where ε_1, ε_2, . . . ∼ i.i.d. N(0, σ²). When working with linear models M_p, we are usually interested in finding parameters β that predict well in terms of the squared error loss function (henceforth abbreviated to square-loss): the square-loss incurred on data (x, y) when predicting with (p, β) is (y − Σ_{j=0}^p β_j x_j)². We thus want to find the distribution minimizing the expected square-loss, i.e. the squared error risk (henceforth abbreviated to 'square-risk') relative to the underlying P*:

risk_sq(p, β) := E_{(X,Y)∼P*}[(Y − Σ_{j=0}^p β_j X_j)²].

Since this quantity is independent of the variance σ², σ² is not used as an argument of risk_sq.

KL-Associated Prediction Tasks for the Linear Model: The KL-Optimal θ̃ = (p̃, β̃, σ̃²) is Square-Risk Optimal and Reliable

Suppose that an optimal P̃ ∈ M exists in the regression model. We denote by p̃ the smallest p such that P̃ ∈ M_p, and define σ̃², β̃ such that P̃ = P_{p̃,β̃,σ̃²}. A straightforward computation shows that for all (p, β, σ²) ∈ Θ:

risk_log(p, β, σ²) = (1/2) log(2πσ²) + risk_sq(p, β)/(2σ²),   (4)

so that the (p, β) achieving minimum log-risk for each fixed σ² is equal to the (p, β) with minimum square-risk. In particular, (p̃, β̃, σ̃²) must minimize not just log-risk, but also square-risk. Moreover, the conditional expectation E_{P*}[Y | X] is known as the true regression function. It minimizes the square-risk among all conditional distributions for Y | X. Together with (4), this implies that if some (p, β) ∈ Θ represents the true regression function, then (p̃, β̃) also represents the true regression function. In all our examples this will be the case: the model is misspecified only in that the true noise is heteroskedastic; but the model does invariably contain the true regression function.
Moreover, for each fixed (p, β), the σ² minimizing risk_log is, as follows by differentiation, given by σ² = risk_sq(p, β). In particular, this implies that

σ̃² = risk_sq(p̃, β̃),

or in words: the KL-optimal model variance σ̃² is equal to the true expected (marginal, not conditioned on X) square-risk obtained if one predicts with the optimal (p̃, β̃). This means that the optimal (p̃, β̃, σ̃²) is reliable in the sense of Grünwald (1998, 1999): its self-assessment of its square-loss performance is correct, independently of whether β̃ is equal to the true regression function or not: (p̃, β̃, σ̃²) correctly predicts how well it predicts. Summarizing, for misspecified models, (p̃, β̃, σ̃²) is optimal not just in the KL/log-risk sense, but also in terms of square-risk and in terms of reliability; in our examples, it also represents the true regression function. We say that, for linear models, square-risk optimality, square-risk reliability and regression-function consistency are KL-associated prediction tasks: if (as we hope Bayes will do, but as we will see it sometimes won't) we can find the KL-optimal θ̃, we automatically behave well on these associated tasks as well.
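The fact that, for fixed (p, β), the log-risk-minimizing σ² equals the square-risk can be checked numerically on the in-lier example of the Introduction. The sketch below is ours (σ_0 = 1, the grid and sample size are arbitrary choices); it fixes β = 0 and minimizes the empirical log-risk over a grid of σ² values:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma0 = 200_000, 1.0
heads = rng.integers(0, 2, size=n).astype(bool)
y = np.where(heads, rng.normal(0.0, sigma0, size=n), 0.0)  # in-liers at 0

def log_risk(sig2, y):
    """Empirical log-risk of the model Y ~ N(0, sig2), i.e. beta fixed at 0."""
    return np.mean(0.5 * np.log(2 * np.pi * sig2) + y ** 2 / (2 * sig2))

grid = np.linspace(0.05, 2.0, 400)
best = grid[np.argmin([log_risk(s, y) for s in grid])]
# The minimizer is close to the square-risk E[Y^2] = sigma0^2 / 2 = 0.5,
# i.e. the KL-optimal sigma-tilde^2 of this section.
print(best, np.mean(y ** 2))
```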

The Generalized Posterior
General Losses The generalized posterior is a notion going back at least to Vovk (1990) and has been developed mainly within the so-called (frequentist) PAC-Bayesian framework (McAllester, 2003; Seeger, 2002; Catoni, 2007; Audibert, 2004; Zhang, 2006b); see also Bissiri et al. (2013) and the discussion in Section 7. It is defined relative to a prior on predictors rather than on probability distributions. Depending on the decision problem at hand, predictors can be e.g. classifiers, regression functions or probability densities. Formally, we are given an abstract space of predictors represented by a set Θ, which obtains its meaning in terms of a loss function ℓ : Z × Θ → R, writing ℓ_θ(z) as shorthand for ℓ(z, θ). Following e.g. Zhang (2006b), for any prior Π on Θ with density π relative to some underlying measure ρ, we define the generalized Bayesian posterior with learning rate η relative to loss function ℓ, denoted Π | Z^n, η, as the distribution on Θ with density

π(θ | z^n, η) := π(θ) e^{−η Σ_{i=1}^n ℓ_θ(z_i)} / ∫ π(θ′) e^{−η Σ_{i=1}^n ℓ_{θ′}(z_i)} ρ(dθ′).

Thus, if θ_1 fits the data better than θ_2 by some difference according to loss function ℓ, then their posterior ratio is larger than their prior ratio by an amount exponential in that difference, where the larger η, the larger the influence of the data compared to the prior. If, for example, the data are z_i = (x_i, y_i) with x_i = (x_{i1}, . . . , x_{ip}), and the goal is to predict y_i given x_i, then we may take as our prediction model e.g. the set of linear predictors that predict y_i by Σ_j β_j x_{ij} = x_i β, and as our loss function the squared error loss, ℓ_β(x_i, y_i) = (y_i − x_i β)². We may then study the behavior of such a procedure in its own right, irrespective of a Bayesian misspecification interpretation; the experiments we perform in Appendices A.1 and A.1.2 can be interpreted in this manner.
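For a finite grid of predictors, the generalized posterior is a one-liner. The sketch below is our own minimal example (the function name and toy numbers are ours); it makes the exponential reweighting explicit and is computed in log space for numerical stability:

```python
import numpy as np

def generalized_posterior(prior, cum_loss, eta):
    """Eta-generalized posterior over K predictors: prior weights times
    exp(-eta * cumulative loss), renormalized."""
    logw = np.log(prior) - eta * np.asarray(cum_loss, dtype=float)
    logw -= logw.max()                 # numerical stabilization
    w = np.exp(logw)
    return w / w.sum()

prior = np.array([0.5, 0.5])
cum_loss = np.array([3.0, 5.0])        # predictor 0 fits better by 2.0
for eta in (0.0, 0.5, 1.0):
    # posterior ratio = prior ratio * exp(eta * loss difference):
    # eta = 0 returns the prior; larger eta favors predictor 0 more strongly.
    print(eta, generalized_posterior(prior, cum_loss, eta))
```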

Log-Loss and Likelihood
Now if the set Θ represents a model of (conditional) distributions M = {P_θ | θ ∈ Θ}, we may set, for z_i = (x_i, y_i), ℓ_θ(z_i) = −log f_θ(y_i | x_i), the log-loss as defined above. In this special case, the definition of the η-generalized posterior specializes to the definition of 'generalized posterior' as known within the Bayesian literature (Walker and Hjort, 2002; Zhang, 2006a):

π(θ | z^n, η) := π(θ) (Π_{i=1}^n f_θ(y_i | x_i))^η / ∫ π(θ′) (Π_{i=1}^n f_{θ′}(y_i | x_i))^η ρ(dθ′).

Again, the larger η, the larger the influence of the likelihood. Obviously η = 1 corresponds to standard Bayesian inference, whereas if η = 0 the posterior is equal to the prior and nothing is ever learned. Our algorithm for learning η will usually end up with values in between. It has long been known that in model selection and nonparametric settings, there is an issue with consistency proofs for full Bayes, Bayes MAP and MDL if we take the standard η = 1, and indeed, this is part of the reason why the generalized posterior in the form (7) was derived in the first place: for example, Barron and Cover (1991) give general consistency theorems for two-part MDL (closely related to Bayes MAP) and note that they hold for any η < 1; but for η = 1, additional assumptions must be made. Zhang (2006a) gives an explicit example in which the posterior shows anomalous behavior at η = 1. A connection to misspecification was first made by Grünwald (2011) (see Section 7.1) and Grünwald (2012).

Generalized Predictive Distribution
We also define the predictive distribution based on the η-generalized posterior (7), as a generalization of the standard definition, as follows:

f̄(y_{n+1} | x^{n+1}, y^n, η) := E_{θ∼Π|z^n,η}[f_θ(y_{n+1} | x^{n+1}, y^n)] = E_{θ∼Π|z^n,η}[f_θ(y_{n+1} | x_{n+1})],

where the first equality is a definition and the second follows by our i.i.d. assumption. We always use the bar notation f̄ to indicate marginal and predictive distributions, i.e. distributions on data that are arrived at by integrating out parameters. If η = 1 then f̄ and π become the standard Bayesian predictive density and posterior, and if it is clear from the context that we consider η = 1, we leave out the η in the notation.
The generalized posterior is created by exponentiating the likelihood of individual elements θ ∈ Θ = ⋃_p Θ_p in the model and renormalizing, which is not the same as exponentiating marginal likelihoods and renormalizing. In particular, π(p | z^n, η) as given by (10) is in general not proportional to (f̄(y^n | x^n, p))^η π(p). Similarly, for generalized marginal distributions, as soon as η ≠ 1, we have that in general

f̄(y^n | x^n, η) ≠ Π_{i=1}^n f̄(y_i | x^i, y^{i−1}, η),

unlike for the standard Bayesian marginal distribution, for which equality holds (in Section 6.5 we encounter a further modification of the generalized posterior whose marginals do satisfy this product rule).

Instantiation to Linear Model Selection and Averaging
Now consider again a linear model M_p as defined in Section 2.3. We instantiate the generalized posterior and its marginals for this model. With prior π(β, σ² | p) taken relative to Lebesgue measure, (7) specializes to:

π(β, σ² | z^n, p, η) ∝ π(β, σ² | p) · (2πσ²)^{−ηn/2} exp(−η Σ_{i=1}^n (y_i − x_i β)²/(2σ²)).

Note that in the numerator 1/σ² and η are interchangeable in the exponent, but not in the factor in front: their roles are subtly different. For Bayesian inference with a sequence of models M = ⋃_{p=0}^{p_max} M_p, with π(p) a probability mass function on p ∈ {0, . . . , p_max}, we get:

π(β, σ², p | z^n, η) ∝ π(p) π(β, σ² | p) · (2πσ²)^{−ηn/2} exp(−η Σ_{i=1}^n (y_i − x_i β)²/(2σ²)).

The total generalized posterior probability of model M_p then becomes: π(p | z^n, η) = ∫∫ π(β, σ, p | z^n, η) dβ dσ.
Analogously to (8), for given p, we define (writing a_i^j as shorthand for a_i, . . . , a_j) the η-generalized Bayesian predictive distribution as

f̄(y_{n+1} | x^{n+1}, y^n, p, η) := E_{(β,σ²)∼Π|z^n,p,η}[f(y_{n+1} | x_{n+1}, β, σ²)].

The previous displays held for general priors. The experiments in this paper adopt widely used priors (see e.g. Raftery et al. (1997)): normal priors on the β's and inverse gamma priors on the variance. These conjugate priors allow explicit analytical formulas for all relevant quantities for arbitrary η, provided below. We only consider the simple case of a fixed M_p here; the more complicated formulas with an additional prior on p are given in Appendix D.
Fixed p, varying σ² Now consider linear models with a Gaussian prior on β conditional on σ² as above, and a conjugate (inverse gamma) prior on σ², i.e. π(σ²) = Inv-gamma(σ² | a_0, b_0) for some a_0 and b_0. Here we use the following parameterization of the inverse gamma distribution:

Inv-gamma(σ² | a, b) = (b^a / Γ(a)) (σ²)^{−a−1} e^{−b/σ²}.

The posterior π(σ² | z^n, p, η) is then given by Inv-gamma(σ² | a_{n,η}, b_{n,η}), where a_{n,η} = a_0 + ηn/2 and b_{n,η} is given by the corresponding conjugate update of b_0. The posterior expectation of σ² can then be calculated as

E[σ² | z^n, p, η] = b_{n,η} / (a_{n,η} − 1).

Note that the posterior mean of β given σ² does not depend on σ².
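To make the conjugacy concrete, consider the simplified case where β is known and fixed, so that only σ² is updated. This special case is our own illustration, not the paper's full formula (which also updates β): tempering the Gaussian likelihood by η multiplies both the exponent −n/2 of σ² and the residual sum of squares by η, so the update stays inverse gamma:

```python
import numpy as np

def eta_update_invgamma(resid, a0, b0, eta):
    """Eta-generalized conjugate update for sigma^2 with KNOWN mean
    (illustrative simplification): the tempered likelihood
    (sigma^2)^(-eta*n/2) exp(-eta*RSS/(2 sigma^2)) times an
    Inv-gamma(a0, b0) prior gives Inv-gamma(a0 + eta*n/2, b0 + eta*RSS/2)."""
    n = len(resid)
    a_n = a0 + eta * n / 2
    b_n = b0 + eta * np.sum(np.square(resid)) / 2
    return a_n, b_n

rng = np.random.default_rng(3)
resid = rng.normal(0.0, 1.0, size=1000)    # residuals y_i - x_i beta
a_n, b_n = eta_update_invgamma(resid, a0=2.0, b0=1.0, eta=0.5)
# Posterior mean of sigma^2 under Inv-gamma(a, b) is b / (a - 1) for a > 1;
# for these standard-normal residuals it is close to 1.
print(b_n / (a_n - 1))
```

Setting eta=1.0 recovers the standard Bayesian conjugate update.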

Introducing Safe Bayes via the Prequential View
We introduce SafeBayes via Dawid's prequential interpretation of Bayes factor model selection. As was first noticed by Dawid (1984) and Rissanen (1984), we can think of Bayes factor model selection as picking the model with index p that, when used for sequential prediction with a logarithmic scoring rule, minimizes the cumulative loss. To see this, note that for any distribution whatsoever, we have, by definition of conditional probability,

f̄(y^n | x^n) = Π_{i=1}^n f̄(y_i | x^i, y^{i−1}).

In particular, for the standard Bayesian marginal distribution f̄(· | p) = f̄(· | p, η = 1) as defined above, for each fixed p, we have

Σ_{i=1}^n −log f̄(y_i | x^i, y^{i−1}, p) = −log Π_{i=1}^n f̄(y_i | x^i, y^{i−1}, p) = −log f̄(y^n | x^n, p),   (16)

where the second equality holds by (11). If we assume a uniform prior on model index p, then Bayes factor model selection picks the model maximizing π(p | z^n), which by Bayes' theorem coincides with the model minimizing (16), i.e. minimizing cumulative log-loss. Similarly, in 'empirical Bayes' approaches, one picks the value of some nuisance parameter ρ that maximizes the marginal Bayesian probability f̄(y^n | x^n, ρ) of the data. By (16), which still holds with p replaced by ρ, this is again equivalent to picking the ρ minimizing the cumulative log-loss. This is the prequential interpretation of Bayes factor model selection and empirical Bayes approaches, showing that Bayesian inference can be interpreted as a sort of forward (rather than cross-) validation (Dawid, 1984; Rissanen, 1984; Hjorth, 1982). We will now see whether we can use this approach with ρ in the role of the η for the η-generalized posterior that we want to learn from the data.
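The chain-rule identity behind (16) holds for any joint distribution. The sketch below verifies it for a Beta-Bernoulli model, chosen (by us, not the paper) because its marginal likelihood has a closed form: the cumulative sequential log-loss of the Bayes predictive equals minus the log marginal likelihood exactly.

```python
from math import lgamma, log

def log_marginal(z, a=1.0, b=1.0):
    """Log marginal likelihood of binary z under a Beta(a, b) prior:
    log[B(a + k, b + n - k) / B(a, b)] with k ones out of n outcomes."""
    k, n = sum(z), len(z)
    return (lgamma(a + k) + lgamma(b + n - k) + lgamma(a + b)
            - lgamma(a) - lgamma(b) - lgamma(a + b + n))

def cumulative_log_loss(z, a=1.0, b=1.0):
    """Sequential log-loss of the Bayes predictive: before outcome i the
    predictive probability of a 1 is (a + #ones so far) / (a + b + i)."""
    total, k = 0.0, 0
    for i, zi in enumerate(z):
        p1 = (a + k) / (a + b + i)
        total += -log(p1 if zi else 1.0 - p1)
        k += zi
    return total

z = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
# The two quantities agree up to floating-point rounding.
print(cumulative_log_loss(z), -log_marginal(z))
```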
We continue to rewrite (16) as follows (with ρ instead of p, where ρ can stand either for a continuous-valued parameter or for a model index, but not yet for η), using the fact that the Bayes predictive distribution given ρ and z^{i−1} can be rewritten as a posterior-weighted average of f_θ:

Σ_{i=1}^n −log E_{θ∼Π|z^{i−1},ρ}[f_θ(y_i | x_i)].   (17)

This choice for ρ̂ being entirely consistent with the Bayesian approach, our first idea is to choose η̂ in the same way: we simply pick the η achieving the minimum of (17), with ρ substituted by η. However, as Figure 13 will show (the blue line there depicts (17) for one of our experiments), this will tend to pick η close to 1 and does not improve predictions under misspecification. Indeed, we introduced η to deal with the case in which the Bayesian model assumptions are violated, so we cannot expect that learning it in a Bayes-like way such as (17) will resolve the issue. But it turns out that a slight modification of (17) does the trick: we simply interchange the order of logarithm and expectation in (17) and pick the η minimizing

Σ_{i=1}^n E_{θ∼Π|z^{i−1},η}[−log f_θ(y_i | x_i)].   (18)

In words, we pick the η minimizing the Posterior-Expected Posterior-Randomized log-loss, i.e. the log-loss we expect to obtain, according to the η-generalized posterior, if we actually sample from this posterior. This modified loss function has also been called Gibbs error (Cuong et al., 2013), and while the abbreviation PEPR-log-loss would be more correct, we simply call it the η-R-log-loss from now on. A detailed explanation of why this works will have to wait until Sections 6.3 and 6.4; for now we just notice that by Jensen's inequality, for any fixed η and every sequence of data, we must have

Σ_{i=1}^n E_{θ∼Π|z^{i−1},η}[−log f_θ(y_i | x_i)] ≥ Σ_{i=1}^n −log E_{θ∼Π|z^{i−1},η}[f_θ(y_i | x_i)];

yet the difference between both sides is small if the posterior is concentrated at (x_i, y_i), i.e. if, for small ε and small positive δ, it puts 1 − δ of its mass on distributions which assign the same density to y_i given x_i up to a factor 1 + ε; clearly, if δ = ε = 0 then both sides are the same.
Thus, at values of η at which the generalized posterior is 'cumulatively concentrated', i.e. concentrated at most sample points, the objective function will be similar to the standard Bayesian one. This is the clue to the further analysis of the algorithm to follow later. In practice, it is computationally infeasible to try all values of η, and we simply have to try out a number of values. For convenience we give a detailed description of the resulting algorithm below, copied from Grünwald (2012). In this paper, we will invariably apply it with z_i = (x_i, y_i) as before, and ℓ_θ(z_i) set to the (conditional) log-loss as defined before, although it sometimes also has a second interpretation with ℓ_θ as square-loss.
The algorithm is as follows:

    for all η ∈ S_n do
        s_η := 0;
        for i = 0 to n − 1 do
            calculate the "posterior-expected posterior-randomized loss" of predicting the actual next outcome, r := E_{θ∼Π|z^i,η}[ℓ_θ(z_{i+1})];
            s_η := s_η + r;
        end
    end
    Choose η̂ := arg min_{η∈S_n} s_η (if the minimum is achieved for several η ∈ S_n, pick the largest).

More generally, one may consider objective functions of the form

Σ_{i=1}^n E_{θ∼Π′|z^{i−1},η}[−log f_θ(y_i | x_i)],   (21)

where Π′ should be chosen such that the resulting log-loss is as small as possible. In (18) we set Π′ = Π, but Π′ is allowed to be any distribution on θ under which the expected log-loss is small. The heuristic analysis of Section 6.4 suggests that the smaller the loss that can be formed this way (see also under 'Open Problems' in Section 7), the better the resulting method is expected to work. Now by Jensen's inequality, the η-in-model log-loss or just η-I-log-loss, defined as

Σ_{i=1}^n −log f_{θ̄_{i,η}}(y_i | x_i),   (22)

is always smaller than (18) for the linear models that we consider. This means that, instead of finding the η minimizing (18), we may want to find the η minimizing (22), which is of the form (21) with Π′ equal to a point mass on θ̄_{i,η} := E_{θ∼Π|z^{i−1},η}[θ]. We call the version of SafeBayes which minimizes the alternative objective function (22) in-model SafeBayes, abbreviated to I-SafeBayes, and from now on use R-SafeBayes for the original version based on the R-log-loss. We did not realize the potential benefits of using in-model SafeBayes at the time of writing Grünwald (2012), and while the theoretical results of Grünwald (2012) can be adjusted to deal with such modifications, we cannot yet obtain better theoretical convergence bounds; but this may be an artifact of our proof techniques. A secondary goal of the experiments in this paper is thus to see whether one can really improve SafeBayes by using the 'in-model' version.
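The SafeBayes loop is easy to implement for a finite grid of predictors. The following sketch is our own simplification (a discrete Θ, with per-outcome losses supplied as a matrix; the function name is ours): for each candidate η it accumulates the R-log-loss s_η sequentially and returns the minimizer, breaking ties towards the largest η.

```python
import numpy as np

def r_safe_bayes(loss, prior, etas):
    """SafeBayes sketch on a finite predictor grid.
    loss[i, k]: loss of predictor k on outcome z_{i+1} (e.g. log-loss).
    For each eta, accumulate the posterior-expected posterior-randomized
    loss sequentially; return the eta with minimal cumulative R-loss."""
    n, K = loss.shape
    scores = {}
    for eta in etas:
        cum = np.zeros(K)              # cumulative loss per predictor
        s = 0.0
        for i in range(n):
            logw = np.log(prior) - eta * cum
            w = np.exp(logw - logw.max())
            w /= w.sum()               # generalized posterior Pi | z^i, eta
            s += float(w @ loss[i])    # r := E_{theta ~ posterior}[loss]
            cum += loss[i]
        scores[eta] = s
    best = min(scores, key=lambda e: (scores[e], -e))  # ties: largest eta
    return best, scores

rng = np.random.default_rng(4)
# Two predictors; predictor 0 has the smaller loss on every outcome, so
# larger eta (faster concentration on predictor 0) gives a smaller R-loss.
loss = np.column_stack([rng.uniform(0.0, 1.0, 50),
                        rng.uniform(1.0, 2.0, 50)])
eta_hat, scores = r_safe_bayes(loss, np.array([0.5, 0.5]), [0.0, 0.25, 0.5, 1.0])
print(eta_hat)
```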

Instantiating SafeBayes to the Linear Model
Our experiments concern four instantiations of SafeBayes. R-SafeBayes and I-SafeBayes for models with fixed variance, denoted R-square-SafeBayes and I-square-SafeBayes for reasons that will become clear below, are the topic of the experiments in Appendix A.1 and A.1.2. The main text instead investigates, in Section 5, R-SafeBayes and I-SafeBayes for models with varying variance, denoted R-log-SafeBayes and I-log-SafeBayes. Below we give explicit formulas for each when conditioned on a fixed model M_p; the case with a posterior on p itself can easily be derived from these.
Fixed σ²: R-square- and I-square-SafeBayes When conditioned on a fixed p and σ² (a situation with which we experiment in Section A.1.2), SafeBayes tries to minimize the R-log-loss, which, as an easy calculation shows, is just the sum, from i = 0 to n − 1, of

(1/2) log(2πσ²) + (y_{i+1} − x_{i+1} β̂_{i,η})² / (2σ²) + (1/2) x_{i+1} Σ_{i,η} x_{i+1}^T,   (23)

where β̂_{i,η} and Σ_{i,η} are given as in and below (12). Note that β̂_{i,η} depends on η but not on σ, and note also that, since X_n^T X_n (as in (12)) tends to increase linearly in n and p, the final term is of order p/(nη).
In the corresponding in-model version of SafeBayes, we use the in-model loss given by − log f(y_{i+1} | x_{i+1}, β̂_{i,η}, σ²), which is equal to (23) without the final term. Since the first term of (23) does not depend on the data, this version of SafeBayes thus amounts to picking the η̂ minimizing just the sum of square-loss prediction errors, which does not depend on the chosen σ². It thus becomes a standard version of 'prequential model selection' based on the square-loss, which in turn is similar to (though with different asymptotics than) leave-one-out cross-validation based on the square-loss.
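To illustrate how σ² drops out, here is a sketch of this prequential objective for the linear model (our own code; the function name, prior precision and η-grid are illustrative assumptions):

```python
import numpy as np

def i_square_safebayes(X, y, etas, prior_prec=None):
    """For each eta, accumulate the prequential square-loss of the
    eta-generalized posterior mean; sigma^2 drops out entirely.
    Ties are broken towards the largest eta, as in the algorithm."""
    n, p = X.shape
    if prior_prec is None:
        prior_prec = np.eye(p)               # corresponds to Sigma_0 = I
    best_eta, best_s = None, np.inf
    for eta in sorted(etas, reverse=True):   # visit large eta first
        A, b, s = prior_prec.copy(), np.zeros(p), 0.0
        for i in range(n):
            beta_hat = np.linalg.solve(A, b)       # eta-posterior mean
            s += (y[i] - X[i] @ beta_hat) ** 2     # prequential square-loss
            A += eta * np.outer(X[i], X[i])        # eta-weighted precision update
            b += eta * y[i] * X[i]
        if s < best_s:                             # strict <, so ties keep the larger eta
            best_s, best_eta = s, eta
    return best_eta
```

Note that neither the posterior mean β̂_{i,η} nor the accumulated square-loss involves σ², matching the observation above.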
Indeed, the fixed-σ² versions of SafeBayes can be interpreted in two ways: first, as we did until now, in terms of SafeBayes with ℓ_θ in (20) set to the log-loss, i.e. as a tool for dealing with misspecification; second, with ℓ_θ in (20) set proportionally to the square-loss, as a generic tool to learn good square-loss predictors (not distributions) in a pseudo-Bayesian way. More precisely, I-SafeBayes with the log-loss for fixed σ² is equivalent to the version of I-SafeBayes we would get if we set ℓ_{β,σ²}(x, y) := C(y − xβ)², for any constant C > 0. Similarly, R-SafeBayes with the log-loss for fixed σ² is equivalent to the version of R-SafeBayes we would get if we set ℓ_{β,σ²}(x, y) := C(y − xβ)², although now the equivalence only holds if we set C = 1/(2σ²). For this reason we will from now on refer to them as I-square-SafeBayes and R-square-SafeBayes, respectively.
Varying σ²: R-log- and I-log-SafeBayes Next consider the situation with fixed p and varying σ², with the posterior on σ² an inverse gamma distribution with parameters a_{n,η} and b_{n,η} as given by (14). Then the R-log-loss is given by (24), where ψ is the digamma function, σ̂²_{i,η} is the η-posterior expectation of σ² as given by (15), and r(i, η) is a remainder function which is O(1/i) whenever the sum of squared residuals (y_j − x_j β̂_{i,η})² up to sample i increases linearly in i; this approximation follows by (15). R-SafeBayes for varying σ² minimizes (24), and, because there is now only a log-loss and no direct square-loss interpretation, we will call it R-log-SafeBayes from now on.
To calculate the corresponding in-model version of SafeBayes, I-log-SafeBayes, note that it minimizes the sum of the in-model losses given in (25). Comparing the four versions of SafeBayes, we see that both R-SafeBayes versions have an additional term which decreases in η, increases in model dimensionality p (via the size of the matrix Σ_{i,η}), but becomes negligible for n ≫ p.

SafeBayes learns to predict as well as the Optimal Distribution
We first define the Cesàro-averaged posterior given data Z^n by setting, for any subset Θ' ⊆ Θ,

Π_Ces(Θ' | Z^n) := (1/n) Σ_{i=0}^{n−1} Π(Θ' | Z^i),   (26)

the posterior probability of Θ' averaged over the n posterior distributions obtained so far. Predicting based on Cesàro-averaged posteriors was introduced independently by several authors (Barron, 1987, Helmbold and Warmuth, 1992, Yang, 2000, Catoni, 1997) and has received a lot of attention in the machine learning literature in recent years, also under the names 'on-line to batch conversion of Bayes', progressive mixture rule (Audibert, 2007) and mirror averaging (Juditsky et al., 2008, Dalalyan and Tsybakov, 2012), but is of course unnatural from a Bayesian perspective.
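As a small concrete sketch (our own illustration, again for a conjugate Gaussian-location model rather than the paper's linear model; here we average the posteriors after 1, ..., n observations):

```python
import math

def cesaro_posterior_prob(y, c, sigma2=1.0, tau2=1.0):
    """Cesaro-averaged posterior probability of the set {theta > c} under the
    model y_i ~ N(theta, sigma2), prior theta ~ N(0, tau2): the average of the
    successive posterior probabilities Pi({theta > c} | z^i)."""
    probs = []
    prec, num = 1.0 / tau2, 0.0
    for yi in y:
        prec += 1.0 / sigma2                 # conjugate update with one point
        num += yi / sigma2
        m, sd = num / prec, math.sqrt(1.0 / prec)
        # P(theta > c) under the current posterior N(m, sd^2)
        probs.append(0.5 * (1.0 - math.erf((c - m) / (sd * math.sqrt(2.0)))))
    return sum(probs) / len(probs)
```

The averaging stabilizes the sequence of posteriors: early, diffuse posteriors keep contributing, which is exactly why the construction is unnatural from a strictly Bayesian point of view.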
The main result of Grünwald (2012) essentially states the following. Suppose that, under P*, the density ratios are uniformly bounded, i.e. there is a finite v such that for all θ, θ' ∈ Θ and all z, f_θ(z)/f_{θ'}(z) ≤ v. Suppose further that the prior Π assigns 'sufficient mass' to KL-neighborhoods of P_θ̃. Then Π_Ces applied with the η̂ learned by the Safe Bayesian algorithm concentrates on the optimal P_θ̃. That is, let Θ_δ be the subset of all θ ∈ Θ with D(P* ∥ P_θ) ≥ D(P* ∥ P_θ̃) + δ. Then for all δ > 0, with P*-probability 1, as n → ∞, we have that Π_Ces(Θ_δ | Z^n, η̂) goes to 0. Grünwald (2012) goes on to show that in several settings, one can design priors such that the rate at which the posterior concentrates is minimax optimal, i.e. no algorithm can do better in general. On the negative side, the requirement of bounded density ratios is strong, and the replacement of the standard posterior by the Cesàro one is awkward. On the positive side, the theorem has no further conditions and can be applied to parametric and nonparametric cases alike.
In recent, as yet unpublished work, Grünwald (2014) extends the result to deal with unbounded density ratios as in the regression setting considered here, and to the 'standard' η-generalized rather than the Cesàro-averaged η-generalized posterior. In both cases, convergence can still be proven but the bounds given on the concentration rate worsen by a log n factor. We suspect that in many situations, this is an artifact of the proof technique, and to see whether there is any practical difference, below we include experimental results both for the Cesàro-averaged η-generalized posterior Π Ces (· | Z n ,η) and for the standard η-generalized posterior Π(· | Z n ,η).

Main Experiment: Varying σ 2
In this section we provide our main experimental results, based on linear models M_p as defined in Section 2.2 with a prior on both the mean and the variance. Figures 3–6 depict, and Section 5.3 discusses, the results of model selection/averaging experiments, which choose/average between the models M_0, ..., M_{p_max}, where we consider first an incorrectly and then a correctly specified model, first with p_max = 50 and then with p_max = 100. Section 5.4 contains and interprets additional experiments on Bayesian ridge regression, with a fixed p; a multitude of additional experiments is provided in the appendices. Section 5.5 summarizes the relevant findings of these additional experiments.

Preparing Main Experiments: Model, Priors, Method, 'Truth'
In this subsection we prepare the experiments: Section 5.1.1 describes our priors π; Section 5.1.2 concerns the sampling ('true') distributions P* with which we experiment; and finally, Section 5.2 describes the data statistics that we will report.

The Priors
Prior on Models In our model selection/averaging experiments, we use a fat-tailed prior π(M_p) on the models. This prior was chosen because it remains well-defined for an infinite collection of models, even though we only use finitely many in our experiments.
Variation. As a sanity check, we repeated some of our experiments with a uniform prior on 0, . . . , p_max instead; the results were indistinguishable.
Prior on Parameters given Models Each model M_p has parameters β, σ², on which we put the standard conjugate priors as described in Section 3.1. We set the mean of the prior on β to β₀ = 0, and its covariance matrix to σ²Σ₀. Our main experiments below are based on an informative instantiation of Σ₀, using the identity matrix Σ₀ = I_{p+1}; this prior equals the posterior we would get by starting with an improper Jeffreys' prior on β and then observing, for each coefficient β_j, one extra point z = (x, 0) with x_j = 1 and x_i = 0 for i ≠ j.
Variations We also ran experiments with a 'slightly informative' Σ₀, where we set Σ₀ = 1000 · I_{p+1}, comparable to observing points z = (x, 0) with x_j = 1/√1000. Finally, following the standard reference Raftery et al. (1997), we also used a prior with a level of informativeness depending on the submodel, described in more detail in Appendix A.
As to the prior on σ²: Jeffreys' prior is obtained for the choice a₀ = b₀ = 0 in (13). We do not use this improper prior, because of the well-known issues with Bayes factors under improper priors (O'Hagan, 1995). Moreover, to calculate the posterior's reliability (defined in Section 5.2 and shown in Figure 3) and also for the I-log-loss, we need to calculate the posterior expectation of the variance σ² as given by (15), which is only well-defined and finite for a_n > 1. We want to make π(σ²) as uninformative as possible while ensuring that (for any positive learning rate) this variance exists for the posterior based on at least one sample. This is accomplished by choosing a₀ = 1: for standard Bayes, the posterior after one observation has a₁ = a₀ + 1/2; for generalized Bayes, a₁ = a₀ + η/2. To set b₀, we use that b₀/a₀ represents the sample variance of a virtual initial data sequence (Gelman et al., 2013, Section 14.8). We choose b₀ = 1/40 so that b₀/a₀ = 1/40, the true variance of the noise in our data, as we describe next.
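The arithmetic behind this choice is easy to check directly (a sketch; `inv_gamma_mean` is our own helper name, using the standard fact that an inverse-gamma(a, b) distribution has mean b/(a − 1) for a > 1):

```python
def inv_gamma_mean(a, b):
    """Expectation of sigma^2 under an inverse-gamma(a, b) distribution;
    it equals b / (a - 1) and is finite only when a > 1."""
    return b / (a - 1) if a > 1 else float('inf')

a0, b0 = 1.0, 1.0 / 40                           # the prior used in the experiments
assert inv_gamma_mean(a0, b0) == float('inf')    # not yet defined at the prior itself
assert b0 / a0 == 1 / 40                         # virtual-sample variance matches the true noise
for eta in (1.0, 0.5, 2.0 ** -8):                # any positive learning rate ...
    a1 = a0 + eta / 2                            # ... after one observation ...
    assert inv_gamma_mean(a1, b0) < float('inf') # ... the posterior expectation exists
```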

The "Truth" (Sampling Distribution)
Our experiments fall into two categories: correct-model and wrong-model experiments.

Correct-Model Experiments
Wrong-Model Experiments Now at each time point i, a fair coin is tossed independently of everything else. If the coin lands heads, then the point is 'easy', and (X_i, Y_i) := (0, 0). If the coin lands tails, then X_i is generated as for the correct model, and Y_i is generated as in (27), but now the noise random variables have variance σ₀² = 2σ*² = 1/20. Thus, on tails, Z_i = (X_i, Y_i) is generated as in the correct-model case but with a larger variance; this larger variance has been chosen so that the marginal variance of each Y_i is the same value σ*² in both experiments.
From the results in Section 2.3 we immediately see that, for both experiments, the optimal model is M_p̃ for p̃ = 4, and the optimal distribution in M and in M_p̃ is parameterized by (β̃, σ̃²); in the wrong-model experiment, since σ̃² must be reliable, it must be equal to the square-risk obtained with (p̃, β̃), which is the true marginal variance σ*² = 1/40.

Variations. We have already seen a variation of Experiments 1 and 2 depicted in Figures 1 and 2. In the correct-model version of that experiment, P* is defined by setting X_j = S^j, where S is uniformly distributed on [−1, 1], and Y = 0 + ε with ε ∼ N(0, σ*²), σ*² = 1/40; (X_1, Y_1), . . . are then sampled as i.i.d. copies of (X, Y). Note that the true regression function is 0 here. In Appendix C we briefly consider this and several other variations of these ground truths.

Figure 3 is about the wrong-model, p_max = 50 setting and Figure 4 about the correct-model, p_max = 50 setting; Figure 5 is about the wrong-model, p_max = 100 setting and Figure 6 depicts the correct-model, p_max = 100 setting. For all four experiments we measure three aspects of the performance of Bayes and SafeBayes, each summarized in a separate graph. First, we show the behavior of several prediction methods based on SafeBayes relative to square-risk; second, we measure whether the methods provide a good assessment of their own predictive capabilities in terms of square-loss, i.e. whether they are reliable and not 'overconfident'; third, we check a form of model identification consistency. Below we explain these three performance measures in detail. We postpone all experiments with log-loss rather than square-loss to Section 6.4. We also provide a fourth graph in each case indicating what η̂'s are typically selected by the two versions of SafeBayes.
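A sampler for the wrong-model variation with the polynomial basis (the ground truth of Figures 1 and 2) can be sketched as follows (our own code; seed and p_max are arbitrary choices):

```python
import numpy as np

def sample_wrong_model(n, p_max=50, seed=0):
    """Wrong-model 'truth', polynomial-basis variation: with probability 1/2 a
    point is 'easy', (X_i, Y_i) = (0, 0); otherwise S ~ U[-1, 1], X_ij = S^j,
    and Y_i = 0 + eps with eps ~ N(0, 1/20) -- twice the correct-model noise
    variance 1/40, so that the marginal variance of Y stays sigma*^2 = 1/40."""
    rng = np.random.default_rng(seed)
    X, Y = np.zeros((n, p_max)), np.zeros(n)
    hard = rng.random(n) < 0.5                   # tails of the fair coin
    S = rng.uniform(-1.0, 1.0, size=n)
    for j in range(1, p_max + 1):
        X[hard, j - 1] = S[hard] ** j
    Y[hard] = rng.normal(0.0, np.sqrt(1 / 20), size=hard.sum())
    return X, Y
```

The marginal variance check 0.5 · 0 + 0.5 · (1/20) = 1/40 can be confirmed empirically on a large sample.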

The Statistics We Report
Square-Risk For a given distribution W on (p, β, σ²), the regression function based on W, a function mapping covariate X to R, is given in (28). If we take W to be the η-generalized posterior, then (28) is also simply called the η-posterior regression function. The square-risk relative to P* based on predicting by W is then defined, as an extension of (3), in (29). In the experiments below we measure the square-risk relative to P* at sample size i − 1 achieved by, respectively, (1) the η-generalized posterior, (2) the η-generalized posterior conditioned on the MAP (maximum a posteriori) model, and (3) the η-generalized Cesàro-averaged posterior, i.e. (10), where Π_Ces is the Cesàro-averaged posterior defined as in (26). We do this for three values of η: (a) η = 1, corresponding to the standard Bayesian posterior; (b) η := η̂(Z^{i−1}) set by the R-log Safe Bayesian algorithm run on the past data Z^{i−1}; and (c) η set by the I-log Safe Bayesian algorithm. In the figures of Section 5.3, 1(a) is abbreviated to Bayes, 1(b) to R-log-SafeBayes, 1(c) to I-log-SafeBayes, 2(a) to Bayes MAP, 2(b) to R-log-SafeBayes MAP, and 2(c) to I-log-SafeBayes MAP; results with Cesàro-averaging are discussed but not explicitly shown. In Section 5.4, additionally 3(a) is Bayes Cesàro, 3(b) is R-log-SafeBayes Cesàro, and 3(c) is I-log-SafeBayes Cesàro.
Concerning the three square-risks that we record: the first choice is the most natural, corresponding to prediction with the regression function of the 'standard' η-generalized posterior. The second corresponds to the situation where one first selects a single submodel p_map and then bases all predictions on that model; it has been included because such methods are often adopted in practice. The third choice, the Cesàro-averaged generalized posterior, is included because, when η = η̂ is set by SafeBayes, this is the choice for which Grünwald (2012) provides theoretical convergence results (as we discussed, Grünwald (2014) provides results for the non-averaged η-generalized posterior as well, but these are worse by a log-factor). We are also interested in the results for the Cesàro average at η = 1, because this has been proposed earlier, albeit somewhat implicitly and with different models, to stabilize Bayesian predictions in adversarial circumstances (Helmbold and Warmuth, 1992), so we include these as well.
In Figure 3 and subsequent figures below, we depict these quantities by sequentially sampling data Z_1, Z_2, . . ., Z_{n_max} i.i.d. from a P* as defined above in Section 5.1.2, where n_max is some large number. At each i, after the first i − 1 points Z^{i−1} have been sampled, we compute the three square-risks given above. We repeat the whole procedure a number of times (called 'runs'); the graphs show the average risks over these runs.

MAP-model identification/Occam's Razor
When the goal of inference is model identification, 'consistency' of a method is often defined as its ability to identify the smallest model M_p̃ containing the 'pseudo-truth' (β̃, σ̃²). To see whether standard Bayes and/or SafeBayes are consistent in this sense, we check whether the MAP model p_map(Z^{i−1}, η̂) is equal to p̃.
Reliability vs. Overconfidence Does Bayes learn how good it is in terms of squared error? To answer this question, we consider, for a predictive distribution W as in (29), the expected error we make if we predict Y_i using the regression function based on W. In the graphs in the next sections we plot the self-confidence ratio as a function of i for the three prediction methods/choices of W defined above. We may think of this as the ratio between the actual expected prediction error (measured in square-loss) one gets by using a predictor who bases predictions on W, and the marginal (averaged over X) subjectively expected prediction error of this predictor. We previously showed, in Section 2.3, that the KL-optimal (p̃, β̃, σ̃²) is reliable: this means that, if we take W to be the point mass on (p̃, β̃, σ̃²), and thus predict with the regression function Σ_{j=0}^{p̃} β̃_j X_ij, then the ratio is 1. For the W learned from data considered above, a value larger than 1 indicates that W does not implement a 'reliable' method in the sense of Section 2.3, but rather an overconfident one: it predicts its predictions to be better than they actually are, in terms of square-risk.
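In symbols, the ratio is simple arithmetic (a schematic illustration with a hypothetical helper, using the wrong-model numbers from Section 5.1.2):

```python
def self_confidence_ratio(actual_risk, subjective_risk):
    """Ratio of the actual expected square-loss of W's regression function to
    the square-loss W itself expects; 1 means reliable, > 1 overconfident."""
    return actual_risk / subjective_risk

# the KL-optimal point mass is reliable in the wrong-model experiment:
# it incurs square-risk sigma~^2 = 1/40 and also expects 1/40
assert self_confidence_ratio(1 / 40, 1 / 40) == 1.0
# a method whose subjective variance estimate drops below its actual risk
# is overconfident
assert self_confidence_ratio(1 / 40, 1 / 80) == 2.0
```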

Main Model Selection/Averaging Experiment
We run the Safe Bayesian algorithm of Section 4 with z_i = (x_i, y_i), where ℓ_θ(z_i) = − log f_θ(y_i | x_i) is the (conditional) log-loss as described in that section. As to the parameters of the algorithm (page 13), in all experiments we set the step-size κ_step = 1/3 and κ_max := 8, i.e. we tried the following values of η: 1, 2^{−1/3}, 2^{−2/3}, . . ., 2^{−8}. The results of the wrong-model and correct-model experiments as described above, with p_max = 50 and p_max = 100 respectively, are given in Figures 3–6.

Conclusion 2: if (and only if) the model is incorrect, then the higher p_max, the worse Bayes gets We see from Figures 4 and 6 that standard Bayes behaves excellently in terms of all quality measures (square-risk, MAP model identification and reliability) when the model is correct, both if p_max = 50 and if p_max = 100, the behavior at p_max = 100 being essentially indistinguishable from that at p_max = 50. These and other (unreported) experiments strongly suggest that, when the data are sampled from a low-dimensional model and the model is correct, standard Bayes is unaffected (does not get confused) by adding additional high-dimensional models to the model space. Indeed, the same is suggested by various existing Bayesian consistency theorems, such as those by Doob (1949), Ghosal et al. (2000) and Zhang (2006a).
At the same time, from Figures 3 and 5 we infer that standard Bayes behaves very badly in all three quality measures in our (admittedly very 'evilly chosen') model-wrong experiment. Eventually, at very large sample sizes, Bayes recovers, but the main point to notice here is that the n at which a given level of recovery (measured in, say, square-loss) takes place is much higher for the case p_max = 100 (Figure 5) than for the case p_max = 50 (Figure 3). This strongly suggests that, when the model is incorrect but the best approximation lies in a low-dimensional submodel, standard Bayes gets confused by adding additional high-dimensional models to the model space: recovery takes place at a sample size that increases with p_max. Indeed, the graphs strongly suggest that in the case p_max = ∞ (with which we cannot experiment), Bayes will be inconsistent in the sense that the risk of the posterior predictive will never reach the risk attainable with the best submodel. Grünwald and Langford (2007) showed that this can indeed happen with a simple, but much more unnatural classification model; the present result indicates (but does not prove) that it can happen with our standard model as well.
Conclusion 3: R-log-SafeBayes and I-log-SafeBayes generally perform well Comparing the four graphs for R-log-SafeBayes and I-log-SafeBayes, we see that they behave quite well in both the model-correct and the model-wrong experiments, being slightly worse than, though still competitive with, standard Bayes when the model is correct, and incomparably better when the model is wrong. Indeed, in the wrong-model experiments, about half of the data points are identical and therefore do not provide much information, so one would expect that if a 'good' method achieves a given level of square-risk at sample size n in the correct-model experiment, it achieves the same level at about 2n in the incorrect-model experiment; this is indeed what happens. Also, we see from comparing Figures 5 and 6 on the one hand to Figures 3 and 4 on the other that adding additional high-dimensional models to the model space hardly affects the results: like standard Bayes when the model is correct, SafeBayes does not get confused by the additional, larger model space.

Secondary Conclusions
We see that both types of SafeBayes converge quickly to the right (pseudo-true) model order, which is pleasing since they were not specifically designed to achieve this. Whether this is an artifact of our setting or holds more generally would, of course, require further experimentation. We note that at small sample sizes, when both types of SafeBayes still tend to select an overly simple model, I-log-SafeBayes has significantly more variability in the model chosen on average; it is not clear though whether this is 'good' or 'bad'. We also note that the η̂'s chosen by both versions are very similar for all but the smallest sample sizes, and are consistently smaller than 1. When, instead of the full η-generalized posteriors, the η-generalized posterior conditioned on the MAP model p̂_map is used, the behavior of all methods consistently deteriorates a little, but never by much.
For lack of space in the graphs, we did not show the Cesàro versions of Bayes, R-log-SafeBayes and I-log-SafeBayes (methods 3(a), 3(b), 3(c) in Section 5.2). Briefly, the curves look as follows: Cesàro-Bayes performs significantly better than standard Bayes in all three quality measures in the wrong-model experiments, but is still far from competitive with the two (full-posterior) SafeBayes versions. When Cesàro-ified, the SafeBayes methods become a bit smoother but not necessarily better. Very similar behavior of Cesàro averaging (making bad methods significantly better but still not competitive, and good methods smoother, sometimes a bit worse and sometimes a bit better) is explicitly depicted for the ridge regression with varying σ² in Section 5.4 below.

Figure 3: Four graphs showing respectively the square-risk, MAP model order, overconfidence (lack of reliability), and selected η̂ at each sample size, each averaged over 30 runs, for the wrong-model experiment with p_max = 50, for the methods indicated in Section 5.2. For the selected-η̂ graph, the pale lines are one standard deviation away from the average; all lines in this graph were computed over η̂ indices (so that the lines depict the geometric mean over the values of η̂ themselves).
Figure 4: As Figure 3, for the correct-model experiment with p_max = 50.
Figure 5: As Figure 3, for the wrong-model experiment with p_max = 100.
Figure 6: As Figure 3, for the correct-model experiment with p_max = 100.

Second Experiment: Ridge Regression, Varying σ 2
We repeat the model-wrong and model-correct experiments of Figures 3 and 4, with just one major difference: all posteriors are conditioned on p := p_max = 50. Thus, we effectively consider just a single fixed, high-dimensional model, whereas the best approximation θ̃ = (50, β̃, σ̃²), viewed as an element of M_p, is 'sparse' in that it has only β₁, . . ., β₄ not equal to 0. We note that the MAP model index graphs of Figures 3 and 4 are meaningless in this context (they would be constant at 50), so they are left out of the new Figures 7 and 8.
Instantiating Safe Bayes Since we noticed in preliminary experiments that some versions of SafeBayes now have a tendency to select much smaller values of η than in the previous experiments, we now set κ_max = 16 (large enough so that in no experiment the optimal η < 2^{−κ_max}); for computational reasons we also increased the step size and set κ_step = 1.

Connection to Bayesian (B)ridge Regression
From (12) we see that the posterior mean parameter β̂_{i,η} is equal to the posterior MAP parameter and depends on η but not on σ², since σ² enters the prior in the same way as the likelihood. Therefore, the square-loss obtained when using the generalized posterior for prediction is always given by (y_i − x_i β̂_{i,η})², irrespective of whether we use the posterior mean, or the MAP, or the value of σ². Interestingly, if we fix some λ and perform standard (nongeneralized) Bayes with a modified prior, proportional to the original prior raised to the power λ := η^{−1}, then the prior becomes normal N(β₀, σ²Σ₀') with Σ₀' = ηΣ₀, and the standard posterior given z^i is then (by (12)) Gaussian with the same mean β̂_{i,η}. Thus we see that in this special case, the (square-risk of the) η-generalized Bayes posterior mean coincides with the (square-risk of the) standard Bayes posterior mean with prior N(β₀, σ²ηΣ₀). But this means that the square-loss obtained by η-generalized Bayes on a data sequence is precisely equal to the square-loss obtained by Bayesian ridge regression with penalty parameter λ = η^{−1}, as defined by, e.g., Park and Casella (2008) (to be precise, they call this method Bayesian 'Bridge' Regression with q = 2; the choice q = 1 in their formula gives their celebrated 'Bayesian Lasso'). It is thus of interest to see what happens if η (equivalently, λ) is determined by empirical Bayes, which is one of the methods Park and Casella (2008) suggest. In addition to the graphs discussed earlier in Section 5.2, we thus also show the results for η set in this alternative way. Whereas this empirical-Bayesian ridge regression is usually a very competitive method (indeed in our model-correct experiment, Figure 8, it performs best in all respects), we will see in Figure 7 (the green line) that, just like other versions of Bayes, it breaks down under our type of misspecification. We hasten to add that the correspondence between the η-generalized posterior means and the standard posterior means with prior raised to the power 1/η only holds for the β̂_{i,η} parameters. It does not hold for the σ̂²_{i,η} parameters, and thus, for fixed η, the overconfidence of the two methods may be quite different.
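This equivalence between the η-generalized posterior mean and the standard Bayes/ridge mean with rescaled prior covariance is easy to verify numerically (a sketch under the fixed-σ² conjugate setting; data, dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 5
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
Sigma0_inv = np.eye(p)              # prior covariance sigma^2 * Sigma_0 with Sigma_0 = I

def gen_posterior_mean(eta):
    # eta-generalized posterior mean: (eta X'X + Sigma_0^{-1})^{-1} eta X'y
    return np.linalg.solve(eta * X.T @ X + Sigma0_inv, eta * X.T @ y)

def ridge_mean(lam):
    # standard Bayes with prior covariance scaled by eta = 1/lam,
    # i.e. Bayesian ridge regression with penalty parameter lam
    return np.linalg.solve(X.T @ X + lam * Sigma0_inv, X.T @ y)

eta = 0.25
assert np.allclose(gen_posterior_mean(eta), ridge_mean(1 / eta))
```

Multiplying numerator and denominator of the ridge mean by η shows the identity algebraically: (X'X + (1/η)Σ₀⁻¹)⁻¹ X'y = (ηX'X + Σ₀⁻¹)⁻¹ ηX'y.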
Conclusions for Model-Wrong Experiment For most curves, the overall picture of Figure 7 is comparable to the corresponding model averaging experiment, Figure 3: when the model is wrong, standard Bayes shows dismal performance in terms of risk and reliability up to a certain sample size and then very slowly recovers, whereas both versions of SafeBayes perform quite well even for small sample sizes. We do not show variations of the graphs for p = p_max = 100.
Figure 7: As Figures 3–6, except for the third graph there (MAP model order), which has no meaning here. The meaning of the curves is given in Section 5.2, except for empirical Bayes, explained in Section 5.4.
The results for the Cesàro versions of our methods are exactly as discussed at the end of Section 5.3.
We also see that, as we already indicated in the introduction, choosing the learning rate by empirical Bayes (thus implementing one version of Bayesian bridge regression) behaves terribly. This complies with our general theme that, to 'save Bayes' in general misspecification problems, the parameter η cannot be chosen in a standard Bayesian manner.
Figure 8: As Figure 7, but for the model-correct experiment conditioned on p := p_max = 50.

Conclusions for Model-Correct Experiment
The model-correct experiment for ridge regression (Figure 8) offers a surprise: we had expected Bayes to perform best, and were surprised to find that the SafeBayes versions obtained smaller risk. Some follow-up experiments (not shown here), with different true regression functions and different priors, shed more light on the situation. Consider the setting in which the coefficients of the true function are drawn randomly according to the prior. In this setting standard Bayes performs at least as well in expectation as any other method, including SafeBayes (the Bayesian posterior now represents exactly what an experimenter might ideally know). SafeBayes (still in this setting) usually chooses η = 1/2 or 1/4, and the difference in risks compared to Bayes is small. On the other hand, if the true coefficients are drawn from a distribution with substantially smaller variance than a priori expected by the prior (a factor 1000 in the 'correct'-model experiment of Figure 8), then SafeBayes performs much better than Bayes. Here Bayes can no longer necessarily be expected to have the best performance (the model is correct, but the prior is "wrong"), and it is possible that a slightly reduced learning rate gives (significantly) better results. It seems that this situation, where the variance of the true function is much smaller than its prior expectation, is not exceptional: for example, Raftery et al. (1997) suggest choosing the variance of the prior in such a way that a large region of parameter values receives substantial prior mass. Following that suggestion in our experiments already gives a variance that is large enough compared to the true coefficients that SafeBayes performs better than Bayes even if the model is correct.
A Joint Observation for the Model-Wrong and Model-Correct Experiments Finally, we note an interesting difference between the two SafeBayes versions here: I-log-SafeBayes seems better for risk, giving a smooth decreasing curve in both experiments. R-log-SafeBayes inherits a trace of standard Bayes' bad behavior in both experiments, with a nonmonotonicity in the learning curve. On the other hand, in terms of reliability, R-log-SafeBayes is consistently better than I-log-SafeBayes (but note that the latter is underconfident, which is arguably preferable to being overconfident, as Bayes is). All in all, there is no clear winner between the two methods.

Executive Summary: Joint Conclusions from Main and Additional Experiments
Standard Bayes In almost all our experiments, standard Bayesian inference fails in its KL-associated prediction tasks (square-risk, reliability) when the model is wrong. Adopting a different prior (such as the g-prior) does not help, with two exceptions in model averaging: (a) when Raftery's prior (Section A.3) is used, Bayes works quite well, but it fails dramatically again (in contrast to SafeBayes) once the percentage of easy points is increased; (b) when it is run with a fixed variance that is significantly larger than the 'best' (pseudo-true) variance σ̃². Moreover, in the ridge regression experiment with fixed σ², we find that standard Bayes can even perform much worse than SafeBayes when the model is correct, so all in all we tentatively conclude that SafeBayes is safer to use for linear regression.
Safe Bayes R-square-SafeBayes is not competitive with the other SafeBayes methods and can sometimes even get worse than Bayes; this is due to an unwanted dependence on the specified scale σ², as explained in Appendix A. The other three SafeBayes methods behave reasonably well in all our experiments, and there is no clear winner among them. I-square-SafeBayes usually behaves excellently for the square-risk but cannot directly be used to assess its own performance. I-log-SafeBayes usually behaves excellently in terms of square-risk as well but is underconfident about its own performance (which is perhaps acceptable, overconfidence being a lot more dangerous). R-log-SafeBayes is usually good in terms of square-risk, though not as good as I-log-SafeBayes, yet it is highly reliable. However, in Appendix B.1, we describe an initial idea for discounting the importance of the first few outcomes and explain why this might improve performance. When combined with this discounting idea, R-log-SafeBayes may actually always be competitive with the other two methods in terms of square-risk as well.
Learning η in a Bayes or Likelihood Way Fails Despite its intuitive appeal, fitting η to the data by e.g. empirical Bayes fails both in the model-wrong ridge experiment with a prior on σ², where it amounts to Bayesian ridge regression (Figure 7), and in the model-wrong fixed-variance ridge experiment, where it amounts to a method for learning the variance (see Section A.1.2).

Robustness of Experiments
It does not matter whether the X i1 , X i2 , . . . are independent Gaussian, uniform or represent polynomial basis functions: all phenomena reported here persist for all choices. If the 'easy' points are not precisely (0, 0), but have themselves a small variance in both dimensions, then all phenomena reported here persist, but on a smaller scale.
Centering We repeated several of our experiments with centered data, i.e. data preprocessed so that the empirical average of the Y_i is exactly 0 (Raftery et al., 1997, Hastie et al., 2001). In none of our experiments did this affect any results. While this is not further mentioned in the appendix, there we also looked at the case where the true regression function has an intercept far from 0 and the data are not centered. This hardly affected the SafeBayes methods.
Other Methods We also repeated the wrong-model experiment for other methods of model selection: AIC, BIC, and various forms of cross-validation. Briefly, we found that all these have severe problems with our data as well. Whereas in these experiments, cross-validation was used to identify a model index p and η played no role, in our final experiment we used leave-one-out cross-validation again to learn η itself. With the squared error loss it worked fine, which is not too surprising given its close similarity to I-square-SafeBayes. However, when we tried it with log-loss (as a likelihoodist or information-theorist might be tempted to do), it behaved terribly.

Bayes' Behavior Explained
In this section we explain how anomalous behavior of the Bayesian posterior may arise, taking a frequentist perspective. Section 6.1 is merely provided to give some initial intuition and may be skipped.

Explanation I: Variance Issues
Example 1 [Bernoulli] Consider the following very simple scenario: our 'model' consists of two Bernoulli distributions, M = {P_θ | θ ∈ {0.2, 0.8}}, where P_θ(Y = 1) = θ. We perform Bayesian inference based on a uniform prior on M. Suppose first that the data are, in fact, sampled i.i.d. from P_{θ*}, where θ* is the 'true' parameter. The model is misspecified: in particular, we will take a θ* ∉ {0.2, 0.8}. The log-likelihood ratio between the two distributions for data Y^n with n_1 ones and n_0 = n − n_1 zeroes, measured for convenience in bits (base 2), is given by

L := log_2 ( P_{0.8}(Y^n) / P_{0.2}(Y^n) ) = 2 (n_1 − n_0).   (32)

With uniform priors, the posterior will prefer θ = 0.2 as soon as L < 0. First suppose θ* = 1/2. Then both distributions in M are equally far from P_{θ*} in terms of KL divergence (or any other commonly used measure). By the central limit theorem, however, we expect that the probability that |L| > √n/2 is larger than a constant for all large n; in this particular case we numerically find that, for all n, it is larger than 0.32.
This implies that, at each n, with 'true' probability at least 0.32, min_{θ∈{0.2,0.8}} π(θ | Y^n) ≈ 2^{−√n/2}. Thus, there is a nonnegligible 'true' probability that the posterior on one of the two distributions is negligibly small, and a naive Bayesian who adopted such a model would be strongly convinced that the other distribution is better, even though both distributions are equally bad. While this already indicates that strange things may happen under misspecification, we are of course more interested in the situation in which θ* ≠ 1/2, so that one of the two distributions in M is truly 'better'. Now, if the 'true' parameter θ* is within O(1/√n) of 1/2, then, by the central limit theorem, the probability that L < 0 is nonnegligible. For example, if θ* is exactly 1/2 + 1/√n, then this probability is larger than 0.16 for all n. Thus, for values of θ* this close to 1/2, there is no way we can even expect Bayes to learn the 'best' value. For fixed (independent of n), larger values of θ*, like 0.6, the posterior will concentrate on 0.8 at an exponential rate, but the sample size at which concentration starts is considerably larger than the sample size needed when the true parameter is, in fact, 0.8. For example, at n = 50, P_{0.6}(L < 0) ≈ 0.1 and P_{0.8}(L < 0) ≈ 2 · 10^{−5}; both probabilities go to 0 exponentially fast, but their ratio increases exponentially with n. So, under a fixed θ*, with increasing n, Bayes may take longer to concentrate on the best θ̃ if θ̃ ≠ θ* (misspecification) than if θ̃ = θ*, but it eventually 'recovers' (this was seen in the ridge experiments of Section 5.4). Now, for larger models, the consequence of slower concentration of the log-likelihood ratio L is that the probability that some 'bad' P_θ happens to 'win' is substantially larger than with a correct model.
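Since L = 2(n_1 − n_0) is a function of the binomial count n_1, the tail probabilities quoted in this example can be computed exactly rather than estimated. A minimal sketch (exact values can differ slightly from the rounded ones in the text, e.g. depending on whether ties L = 0 are counted):

```python
import math

def binom_pmf(n, k, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def prob_L_event(n, theta_star, event):
    # P*( event(L) ), with L = 2*(n1 - n0) the log-likelihood ratio in bits
    # between theta = 0.8 and theta = 0.2, and n1 ~ Binomial(n, theta_star).
    return sum(binom_pmf(n, k, theta_star)
               for k in range(n + 1) if event(2 * (k - (n - k))))

n = 50
# theta* = 1/2: both model points are equally bad, yet |L| is typically large.
p_spread = prob_L_event(n, 0.5, lambda L: abs(L) > math.sqrt(n) / 2)
# Probability that the posterior prefers theta = 0.2 (i.e. L < 0):
p_06 = prob_L_event(n, 0.6, lambda L: L < 0)
p_08 = prob_L_event(n, 0.8, lambda L: L < 0)
print(p_spread, p_06, p_08)
```

The exact computation confirms the qualitative picture: under θ* = 1/2 the posterior is very often lopsided, and P_{0.8}(L < 0) is orders of magnitude smaller than P_{0.6}(L < 0).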
Grünwald and Langford (2007) showed that, in a classification context with an infinite-dimensional model, there are so many such 'bad' P_θ that Bayes does not recover any more, and the posterior keeps putting most of its mass on a bad model for ever (although the particular bad model on which it puts its mass keeps changing). In this paper we empirically showed the same in a regression problem. Now one might conjecture that the issues above are caused by the fact that the model M is 'disconnected'. In the Bernoulli example above, the problem indeed goes away if, instead of the model M, we adopt its 'closure' M̄ = {P_θ | θ ∈ [0.2, 0.8]}. However, high-dimensional regression problems exhibit the same phenomenon, even if their parameter spaces are connected. It turns out that in general, to get concentration at the same rates as if the model were correct, the model must be convex, i.e. closed under taking any finite mixture of the densities, which is a much stronger requirement than mere connectedness. For standard Gaussian regression problems with Y | X ∼ N(0, σ²), this would mean that we would have to adopt a model in which Y | X can be any Gaussian mixture with arbitrarily many components, which is clearly not practical (note that 'convex' refers to the densities, not the regression functions (Grünwald and Langford, 2007, Section 6.3.5)). Barron (1998) showed that sequential Bayesian prediction under a logarithmic score function shows excellent behavior in a cumulative risk sense; for a related result see (Barron et al., 1999, Lemma 4). Although Barron (1998) focuses on the well-specified case, this assumption is not required for the proof, and the result still holds even if the model M is completely wrong. For a precise description and proof of this result emphasizing that it holds under misspecification, see (Grünwald, 2007, Section 15.2). At first sight, this leads to a paradox, as we now explain.

Explanation II: Good vs. Bad Misspecification
A Paradox? Let θ̃ index the KL-optimal distribution in Θ as in Section 2.1. The result of Barron (1998) essentially says that, for arbitrary models Θ, for all n,

Σ_{i=1}^n E_{Z^{i−1}∼P*} [ risk_log( Π(· | Z^{i−1}) ) ] ≤ n · risk_log(θ̃) + red_n,   (33)

where risk_log(W), for a distribution W on Θ, is defined as the log-risk obtained when predicting by the W-mixture of the f_θ, i.e.

risk_log(W) := E_{(X,Y)∼P*} [ − log E_{θ∼W} [ f_θ(Y | X) ] ].   (34)
In (33), this coincides with the log-risk of the Bayes predictive density f̄(· | Z^{i−1}), as defined by (8). Here, as in the remainder of this section, we look at the standard Bayes predictive density, i.e. η = 1. red_n is the so-called relative expected stochastic complexity or redundancy (Grünwald, 2007), which depends on the prior and for 'reasonable' priors is typically small. The result thus means that, when sequentially predicting using the standard predictive distribution under a log-scoring rule, one does not lose much compared to predicting with the log-risk optimal θ̃. When M is a union of a finite or countably infinite number of parametric exponential families and p̃ < ∞ is well-defined, then, under some further regularity conditions which hold in our regression example (Grünwald, 2007), the redundancy is, up to O(1), equal to the BIC term (k̃/2) log n, where k̃ is the dimensionality of the smallest model containing θ̃.
In the regression case, M_p̃ has p̃ + 2 parameters (β_0, . . . , β_p̃, σ²), so in the two experiments of Section 5, k̃ = 6. Thus, in our regression example, when sequentially predicting with the standard Bayes predictive f̄(· | Z^{i−1}), the cumulative log-risk is at most n · risk_log(θ̃), which is linear in n, plus a logarithmic term that becomes comparatively negligible as n increases. This is confirmed by Figure 10 below. Now, for each individual θ = (p, β, σ²) we know from Section 2.3 that, if its log-risk is close to that of θ̃, then its square-risk must also be close to that of θ̃; and θ̃ itself has the smallest square-risk among all θ ∈ Θ. Hence, one would expect the reasoning for log-risk to transfer to square-risk: it seems that when sequentially predicting with the standard Bayes predictive f̄(· | Z^{i−1}), the cumulative square-risk should be at most n times the instantaneous square-risk of θ̃ plus a term that hardly grows with n; in other words, the cumulative square-risk from time 1 to n, averaged over time by dividing by n, should rapidly converge to the constant instantaneous risk of θ̃. Yet the experiments of Section 5 clearly show that this is not the case: Figure 3 shows that, until n = 100, it is about 3 times as large.
This 'paradox' is resolved once we realize that the Bayes predictive density f̄(· | Z^{i−1}) is a mixture of various f_θ, and not necessarily similar to f_θ for any individual θ; the link between log-risk and square-risk (4) only holds for individual θ = (p, β, σ²), not for mixtures of them. Indeed, if at each point in time i, f̄(· | Z^i) were very similar (in terms of e.g. Hellinger distance) to some particular f_{θ_i} with θ_i ∈ Θ, then there would really be a contradiction. Thus, the discrepancy between the good log-risk and bad square-risk results in fact implies that at a substantial fraction of sample sizes i, f̄(· | Z^i) must be substantially different from every f_θ with θ ∈ Θ. In other words, the posterior is not concentrated at such i. A cartoon picture of this situation is given in Figure 9: the Bayes predictive achieves small log-risk because it mixes together several distributions into a single predictive distribution which is very different from any particular single f_θ ∈ M. By Barron's bound (33), the resulting f̄(· | Z^i) must, averaged over i, achieve log-risk almost as small as that of θ̃. We can thus, at least informally, distinguish between "benign" and "bad" misspecification. Bad misspecification occurs if there is a nonnegligible probability that, for a range of sample sizes, the predictive distribution is substantially different from any of the distributions in M. As Figure 9 suggests, 'bad' misspecification cannot occur for convex models M; and indeed, the results by Li (1999) suggest that for such models consistency holds under weak conditions for any η < 1, even under misspecification.
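The role of convexity can be illustrated with a toy calculation (a stand-in for, not an instance of, the paper's regression setup): take a nonconvex 'model' of just two Gaussians and a heteroskedastic-like truth P* that happens to equal their 50/50 mixture. The mixture, which lies outside the two-point model, then has strictly smaller log-risk than either component:

```python
import math

def normal_logpdf(y, mu, var):
    return -(y - mu) ** 2 / (2 * var) - 0.5 * math.log(2 * math.pi * var)

def p_star(y):
    # Truth: equal mixture of a narrow and a wide Gaussian (heteroskedastic-like).
    return 0.5 * math.exp(normal_logpdf(y, 0.0, 0.01)) + 0.5 * math.exp(normal_logpdf(y, 0.0, 1.0))

def log_risk(logdensity, lo=-8.0, hi=8.0, steps=16000):
    # E_{Y ~ P*}[-logdensity(Y)] by simple midpoint grid integration.
    dy = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        y = lo + (i + 0.5) * dy
        total += p_star(y) * -logdensity(y) * dy
    return total

risk_narrow = log_risk(lambda y: normal_logpdf(y, 0.0, 0.01))  # model point 1
risk_wide = log_risk(lambda y: normal_logpdf(y, 0.0, 1.0))     # model point 2
risk_mix = log_risk(lambda y: math.log(p_star(y)))  # mixture, outside the model
print(risk_narrow, risk_wide, risk_mix)
```

Here the mixture's log-risk is the entropy of P* itself, so it beats both components by construction; a Bayes predictive that mixes them can therefore be 'better than the best element of the model', which is exactly the situation a convex model rules out.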

Hypercompression
The picture suggests that if, as in our regression setting, the model is nonconvex (i.e. the set of densities {f_θ | θ ∈ Θ} is not closed under taking mixtures), then f̄(· | Z^i) might in fact be significantly better in terms of log-risk than the best θ̃, even when its individual constituents are all substantially worse than θ̃. If this were indeed the case then, with high P*-probability, we would also get the analogous result for an actual sample (and not just in expectation): the cumulative log-loss obtained by the Bayes predictive should be significantly smaller than the cumulative log-loss achieved with the optimal f_θ̃. Figure 10 below shows that this indeed happens with our data, until n ≈ 100.

The No-Hypercompression Inequality
In fact, Figure 10 shows a phenomenon that is virtually impossible if the Bayesian's model and prior are 'correct' in the sense that data Z^n would behave like a typical sample from them: it easily follows from Markov's inequality (for details see (Grünwald, 2007, Chapter 3)) that, letting Π denote the Bayesian's joint distribution on Θ × Z^n, for each K ≥ 0,

Π( − log f̄(Z^n) ≤ − log f_θ(Z^n) − K ) ≤ e^{−K},

i.e. the probability that the Bayes predictive f̄ cumulatively outperforms f_θ, with θ drawn from the prior, by K log-loss units is exponentially small in K. Figure 10 below thus shows that at sample size n ≈ 90, an a priori formulated event has happened of probability less than e^{−30}, clearly indicating that something about our model or prior is quite wrong.

Figure 9: Benign vs. Bad Misspecification. P̃ = arg min_{P∈M} D(P* ‖ P) is the distribution in model M that minimizes KL divergence to the 'true' P*, but, since the model is nonconvex, the Bayes predictive distribution P̄ may happen to be very different from any P ∈ M. When this happens, we can have 'bad misspecification' and then it may be necessary to decrease the learning rate (in this simplistic drawing P̄ is a mixture of just two distributions; in our regression example it mixes infinitely many). Yet if P* were such that inf_{P∈M} D(P* ‖ P) does not decrease if the infimum is taken over the convex hull of M (e.g. if Q rather than P̃ reached the minimum), then any learning rate η < 1 is fine ('benign' misspecification). In the picture, we even have D(P* ‖ P̄) < D(P* ‖ P̃); in this case we can get hypercompression.
Since the difference in cumulative log-loss between f̄ and f_θ can be interpreted as the number of bits saved when coding the data with a code that would be optimal under f̄ rather than under f_θ, this result has been called the no-hypercompression inequality by Grünwald (2007). The figure shows that for our data, we have substantial hypercompression.
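On a small discrete example the no-hypercompression inequality can be verified exactly by enumeration (here the two-point Bernoulli model of Example 1 with a uniform prior and n = 10; the inequality form Π(−log f̄(Z^n) ≤ −log f_θ(Z^n) − K) ≤ e^{−K} follows the Markov-inequality argument sketched above):

```python
import math
from itertools import product

thetas = [0.2, 0.8]
prior = [0.5, 0.5]
n = 10

def seq_prob(theta, z):
    # Probability of a binary sequence z under Bernoulli(theta).
    return math.prod(theta if y == 1 else 1 - theta for y in z)

def lhs_probability(K):
    # Pi-probability that the Bayes predictive fbar beats f_theta by >= K nats,
    # with theta ~ prior and Z^n ~ P_theta (the Bayesian joint distribution).
    total = 0.0
    for w, th in zip(prior, thetas):
        for z in product((0, 1), repeat=n):
            p_th = seq_prob(th, z)
            fbar = sum(w2 * seq_prob(t2, z) for w2, t2 in zip(prior, thetas))
            if fbar >= math.exp(K) * p_th:
                total += w * p_th
    return total

for K in (0.5, 1.0, 2.0):
    print(K, lhs_probability(K), math.exp(-K))
```

With a uniform two-point prior, f̄/f_θ ≤ 2 always, so for K > ln 2 the event is empty and the bound holds trivially; the exhaustive sum shows it holds with room to spare for smaller K as well.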
The Safe Bayes Error Measure As seen from (18), SafeBayes measures the performance of η-generalized Bayes not by the cumulative log-loss, as standard Bayes does, but instead by the cumulative posterior-expected error when predicting by drawing from the posterior. One way to interpret this alternative error measure is that, at least in expectation, it cannot give rise to hypercompression. Defining (compare to (34)!)

risk^{R-log}(W) := E_{θ∼W} [ risk_log(θ) ],   (35)

we get by Fubini's theorem

risk^{R-log}(W) − risk_log(θ̃) = E_{θ∼W} [ risk_log(θ) − risk_log(θ̃) ] ≥ 0,   (36)

where the inequality follows by definition of θ̃ being log-risk optimal among Θ. There is thus a crucial difference between risk^{R-log} and risk_log: for the latter we just argued that, under misspecification, risk_log(W) − risk_log(θ̃) ≤ 0 is very well possible. Thus, in contrast to predicting with the mixture density E_{θ∼W}[f_θ], prediction by randomization (first sampling θ ∼ W and then predicting with the sampled f_θ) cannot 'exploit' the fact that mixture densities might have smaller log-risk than their components. Thus, if the difference (36) is small, then W must put most of its mass on distributions θ ∈ Θ that have small log-risk themselves. For individual θ, we know that small log-risk implies small square-risk. This implies that if (36) is small, then the (standard) posterior is concentrated on distributions with small R-square-risk. Figure 10 and Figure 11 show the predictive capabilities of standard Bayes in the wrong-model example in terms of cumulative and instantaneous log-loss on a simulated sample. The graphs clearly demonstrate hypercompression: the Bayes predictive cumulatively performs better than the best single model/the best distribution in the model, until at about n ≈ 100 there is a phase transition. At individual points, we see that it sometimes performs a little worse, and sometimes (namely at the 'easy' (0, 0) points) much better than the best distribution.

Experimental Demonstration of Hypercompression for Standard Bayes
We also see that, as anticipated above, randomized and in-model Bayesian prediction do not show hypercompression and in fact perform terribly on the log-loss until the phase transition at n = 100, when they become almost as good as standard Bayes. We see that for η = 1, they perform much worse than standard Bayes. An important conclusion is that if we are only interested in log-loss prediction, then we just want to use standard Bayes rather than randomized or in-model prediction with a different η.
As an aside, we note that the first few outcomes have a dramatic effect on the cumulative R- and I-log-loss (this disappears from Figure 11); this may be because our densities, unlike those considered by Grünwald (2012), have unbounded support, so that there is no v such that Theorem 1 below holds. This observation inspired the idea described in Appendix B.1 about ignoring the first few outcomes when determining the optimal η. Also, we emphasize that the hypercompression phenomenon takes place more generally, not just in our regression setup; for example, the classification inconsistency noted by Grünwald and Langford (2007) can be understood in terms of hypercompression as well.

Figure 10: Cumulative losses (18) and (22), respectively, of standard Bayesian prediction (η = 1) on a single run for the model-averaging experiment of Figure 3. We clearly see that standard Bayes achieves hypercompression, being better than the best single model. And, as predicted by theory, randomized Bayes is never better than standard Bayes, whose curve has negative slope because the densities of the outcomes are > 1 on average.

Figure 12 gives some clues as to how hypercompression is achieved: it shows the variance of the predictive distribution f̄(· | Z^50) as a function of S ∈ [−1, 1] for the polynomial example of Figure 1 in the introduction, at sample size n = 50, where hypercompression takes place. Figure 1 gave the posterior mean (regression function) at n = 100; the function at n = 50 looks similar, correctly having mean 0 at S = 0 but, incorrectly, mean far from 0 at most other S. The predictive distribution conditioned on the MAP model M_{p̂_map(Z^50)} is a t-distribution with approximately 50 degrees of freedom, which means that it is approximately normal. Figure 12 shows that its variance is much smaller than the variance σ̃² at S = 0; as a result, its log-risk conditional on S = 0 is smaller than that of θ̃ = (p̃, β̃, σ̃²) by some large amount A. Conditioned on S ≠ 0, its conditional mean is off by some amount, and its variance is, on average, slightly (but not much) smaller than σ̃², making its conditional log-risk given S ≠ 0 larger than that of θ̃ by an amount A′ where, it turns out, A′ is smaller than A. Both events S = 0 and S ≠ 0 happen with probability 1/2, so that the final, unconditional log-risk of f̄(· | Z^50) is smaller than that of θ̃.
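The A-versus-A′ accounting can be mimicked with closed-form Gaussian cross-entropies (stylized numbers of our own choosing, not the actual posterior predictive of the experiment): a predictive with tiny variance at the easy point gains a large amount A there, while a slightly-off predictive elsewhere loses only a small A′, so unconditionally it beats the best homoskedastic fit in log-risk:

```python
import math

def gauss_cross_entropy(mu1, var1, mu2, var2):
    # E_{Y ~ N(mu1, var1)}[-log N(Y; mu2, var2)]; var1 = 0 gives a point mass.
    return 0.5 * math.log(2 * math.pi * var2) + (var1 + (mu1 - mu2) ** 2) / (2 * var2)

# Truth: with prob 1/2 an 'easy' point y = 0 (at S = 0); else y ~ N(0, 1).
# The best homoskedastic fit (our stand-in for theta-tilde) has variance E[y^2] = 0.5.
theta_easy = gauss_cross_entropy(0.0, 0.0, 0.0, 0.5)
theta_hard = gauss_cross_entropy(0.0, 1.0, 0.0, 0.5)
# Stylized 'predictive': tiny variance at the easy point, slightly-off mean elsewhere.
pred_easy = gauss_cross_entropy(0.0, 0.0, 0.0, 0.01)
pred_hard = gauss_cross_entropy(0.0, 1.0, 0.7, 1.0)

A = theta_easy - pred_easy        # large log-risk gain at S = 0
A_prime = pred_hard - theta_hard  # small log-risk loss at S != 0
print(A, A_prime)
```

Since both events have probability 1/2 and A > A′, the unconditional log-risk of the adaptive predictive is smaller than that of the best single fit, which is hypercompression in expectation.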

How Hypercompression Arises in Regression
Summarizing, hypercompression occurs because the variance of the predictive distribution, conditioned on past data and a new X, is much smaller than σ̃² at X = 0. This suggests that, if instead of a prior on σ² we use models M_p with a fixed σ², we can only get hypercompression (and correspondingly bad square-risk behaviour) if σ² ≪ σ̃², because the predictive variance based on linear models M_p with fixed variance σ², given X = x, is, for all x, lower bounded by σ². Our experiments in Appendix A.1 confirm that this is indeed what happens.

Explanation III: The Mixability Gap & The Bayesian Belief in Concentration
As we indicated at the end of Section 6.2, bad misspecification can occur only if the standard (η = 1) posterior is nonconcentrated. Intriguingly, by formalizing 'concentration' in the appropriate way, we will now show, under some conditions on the prior, that a Bayesian a priori always believes that the posterior will concentrate very fast. Thus, if we observe data Z^n, and for many n′ ≤ n the posterior based on Z^{n′} is not concentrated, then we can view this as an indication of bad misspecification. In the next subsection we will see that SafeBayes selects an η̂ ≪ 1 iff we have such nonconcentration at η = 1. Thus, SafeBayes can partially be understood as a prior predictive check, i.e. a test of whether the assumptions implied by the prior actually hold on the data (Box, 1980).

The Mixability Gap
We express posterior nonconcentration in terms of the mixability gap (Grünwald, 2012; de Rooij et al., 2014). In this section we only consider the special case η = 1 (standard Bayes), for which the mixability gap δ_i is defined as the difference between the 1-R-log-loss (18) and the standard log-loss obtained by predicting with the posterior predictive, at sample size i:

δ_i := E_{θ∼Π(· | Z^{i−1})} [ − log f_θ(y_i | x_i) ] − ( − log f̄(y_i | x_i, Z^{i−1}) ).

Straightforward application of Jensen's inequality as in (19) gives that δ_i ≥ 0. δ_i, which depends on z_1, . . . , z_i, is a measure of the posterior's concentratedness at sample size i when used to predict y_i given x_i: it is small if f_θ(y_i | x_i) does not vary much among the θ that have substantial posterior mass; by strict convexity of − log, it is 0 iff there exists a set Θ_0 ⊆ Θ of posterior probability 1 on which f_θ(y_i | x_i) takes a single value. We set the cumulative mixability gap to be ∆_n := Σ_{i=1}^n δ_i.
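For a finite posterior the mixability gap is a two-line computation; the sketch below (toy numbers of our own) shows δ > 0 for a spread-out posterior and δ ≈ 0 for a concentrated one:

```python
import math

def mixability_gap(posterior, densities):
    # delta = E_{theta~posterior}[-log f_theta(y|x)] - (-log fbar(y|x)),
    # with fbar the posterior mixture density; nonnegative by Jensen's inequality.
    r_log = -sum(w * math.log(f) for w, f in zip(posterior, densities))
    mix = -math.log(sum(w * f for w, f in zip(posterior, densities)))
    return r_log - mix

# Two model points theta in {0.2, 0.8}; for outcome y = 1 the densities are (0.2, 0.8).
print(mixability_gap([0.5, 0.5], [0.2, 0.8]))      # spread-out posterior: gap > 0
print(mixability_gap([0.999, 0.001], [0.2, 0.8]))  # concentrated posterior: gap ~ 0
```

As the text states, the gap vanishes exactly when (almost) all posterior mass sits on θ's that assign the outcome the same density.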

The Bayesian Belief in Posterior Concentration
As a theoretical contribution of this paper, we now show that, under some conditions on model and prior, if the data are as expected by the model and prior, then the expected mixability gap goes to 0 as O((log n)/n), and hence a Bayesian automatically a priori believes that the posterior will concentrate fast. For simplicity we restrict ourselves to a model M = {P_θ : θ ∈ Θ} where Θ is countable, and we let each θ ∈ Θ represent a conditional distribution of Y given X, extended to n outcomes by independence. We let π be a probability mass function on Θ, and define the joint Bayesian distribution Π on Θ × Y^n | X^n in the usual way, so that for measurable A ⊆ Y^n, Π((θ*, A) | X^n = x^n) = π(θ*) · P_{θ*}(A | X^n = x^n). The random variable θ* refers to the θ chosen according to π. We will look at the Bayesian probability distribution of the θ*-expected mixability gap, δ̄_n := E_{θ*}[δ_n].
Theorem 1 Consider a countable model with prior Π as above. Suppose that the density ratios in Θ are uniformly bounded, i.e. there is a v > 1 such that for all (x, y) ∈ X × Y and all θ, θ′ ∈ Θ, f_θ(y | x)/f_{θ′}(y | x) ≤ v. Suppose further that for some η < 1 we have Σ_θ π(θ)^η < ∞. Then for every a > 0 there are constants C_0 and C_1 such that, for all n, the Π-expected mixability gap δ̄_n is of order (log n)/n; moreover, for any 0 < a ≤ 1, there exist C_2 and C_3 such that, with high Π-probability, the cumulative mixability gap ∆_n is bounded polylogarithmically in n. That is, the Bayesian believes that the mixability gap will be small on average and that the cumulative mixability gap will be small with high probability.
Thus, with high probability, ∆ n grows only polylogarithmically, even though it is the difference of two quantities that are typically linear in n. This means that observing a large value of ∆ n strongly indicates misspecification.
We hasten to add that the regularity conditions for Theorem 1 do not hold in the regression problem we study in this paper; the theorem is merely meant to show that ∆_n is believed to be small in idealized circumstances that have been simplified so as to make mathematical analysis easier. Note, however, that the regularity conditions do not constrain Θ in the most important respect: by allowing countably infinite Θ, we can approximate nonparametric models arbitrarily well by suitable covers (Cover and Thomas, 1991). In particular we do allow sets Θ for which maximum likelihood methods would lead to disastrous overfitting at all sample sizes. Also the condition that Σ_θ π(θ)^η < ∞ is standard in proving Bayesian and MDL convergence theorems (Barron and Cover, 1991; Zhang, 2006a). In fact, since the constants C_0 and C_1 scale logarithmically in v, we expect that Theorem 1 can be extended to the regression setting we are dealing with here as long as all distributions in the model have exponentially small tails, using methods similar to those in Grünwald (2014).

Returning to Example 1: as we showed there, at any given n, with P_{θ*}-probability at least 0.32, min_{θ∈{0.2,0.8}} π(θ | Y^n) ≈ 2^{−√n/2}: the posterior puts almost all mass on one θ. Lemma 6 of van Erven et al. (2011) shows that in such cases δ_n is small; in this particular case, δ_n ≤ 2(e − 2) · min_{θ∈{0.2,0.8}} π(θ | Y^n) ≈ 1.42 · 2^{−√n/2}. Thus, the posterior looks exceedingly concentrated at time n, with nonnegligible probability (this unwarranted confidence is a simplified version of what was called the fair balance paradox by Yang (2007), who conjectured it is the underlying reason for the problem of 'overconfident posteriors' in Bayesian phylogenetic tree inference). However, SafeBayes detects misspecification by looking at cumulative concentration, i.e. the sum of the δ's: L as in (32) can be interpreted as a random walk on Z starting at the origin, with equal probabilities of moving to the left and to the right.
By the central limit theorem, the random walk crosses the origin at time n with probability about 1/√(nπ/2) = Õ(n^{−1/2}), so that we may conjecture that, with high probability, it crosses the origin Õ(n · n^{−1/2}) = Õ(n^{1/2}) times. Each time it crosses the origin, the posterior is uniform and hence as nonconcentrated as it can be, and ∆_n is increased by at least a fixed constant. One would therefore expect (under the 'true' θ*) that ∆_n is of order √n, which by Theorem 1 is much larger than a Bayesian a priori expects it to be: the model fails the 'prior predictive check'.
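The conjectured Õ(√n) number of origin visits can be checked exactly: the expected number of returns to 0 within n steps is Σ_k P(S_{2k} = 0) with P(S_{2k} = 0) = C(2k, k)/4^k, which grows like √(2n/π). A small sketch:

```python
import math

def expected_origin_visits(n_steps):
    # E[#returns to 0 of a simple symmetric random walk within n_steps steps]
    # = sum over even times 2k of P(S_{2k} = 0) = C(2k, k) / 4^k.
    total = 0.0
    p = 1.0
    for k in range(1, n_steps // 2 + 1):
        p *= (2 * k - 1) / (2 * k)  # P(S_{2k} = 0) from P(S_{2k-2} = 0)
        total += p
    return total

for n in (100, 400, 1600, 6400):
    print(n, round(expected_origin_visits(n), 2), round(math.sqrt(2 * n / math.pi), 2))
```

Quadrupling n doubles the expected number of visits, confirming the √n growth of ∆_n that the prior predictive check flags.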

How Safe Bayes Works
In its simplest form, the in-model fixed-variance case, SafeBayes finds the η̂ that minimizes cumulative square-loss on the sample, and can thus simply be understood as a pragmatic attempt to find an η̂ that achieves small risk. However, the other versions of SafeBayes do not have such an easy interpretation. To explain them further, we need to generalize the notion of mixability gap in terms of the η′-flattened, η-generalized Bayesian predictive density. The latter is defined, for η, η′ ≤ 1, as

f̄_{η,η′}(y | x, z^{i−1}) := ( E_{θ∼Π(· | z^{i−1}, η)} [ f_θ(y | x)^{η′} ] )^{1/η′}.

The log-loss achieved by η-generalized, η′-flattened Bayesian prediction, − log f̄_{η,η′}(y | x, z^{i−1}), is called the (η, η′)-mix-loss from now on, following terminology from de Rooij et al. (2014). By Jensen's inequality, for any η′ ≤ 1 and any (x, y, z^{i−1}), the (η, η′)-mix-loss is never larger than the corresponding η-R-log-loss. For 0 < η′ ≤ η ≤ 1, the mixability gap δ_{i,η,η′} is defined as the difference between the η-R-log-loss and the (η, η′)-mix-loss:

δ_{i,η,η′} := E_{θ∼Π(· | z^{i−1}, η)} [ − log f_θ(y_i | x_i) ] − ( − log f̄_{η,η′}(y_i | x_i, z^{i−1}) ).

We once again define a cumulative version ∆_{n,η,η′} := Σ_{i=1}^n δ_{i,η,η′}, and note that the definitions are compatible with the special cases δ_i := δ_{i,1,1} and ∆_n := ∆_{n,1,1} defined in the previous subsection. Now we can rewrite the cumulative R-log-loss achieved by Bayes with the η-generalized posterior as

Σ_{i=1}^n E_{θ∼Π(· | Z^{i−1}, η)} [ − log f_θ(y_i | x_i) ] = CML_{n,η,η′} + ∆_{n,η,η′},   (42)

where CML_{n,η,η′} := Σ_{i=1}^n − log f̄_{η,η′}(y_i | x_i, Z^{i−1}) is the cumulative (η, η′)-mix-loss. (42) holds for all 0 < η′ ≤ η ≤ 1. Consider first η = η′ = 1.
As was seen, if ∆_{n,1,1} is large, then this indicates potential bad misspecification. But (42) still holds for smaller η′ < 1; by Jensen's inequality, for any fixed η, decreasing η′ will make ∆_{n,η,η′} smaller as well. Indeed, for any fixed P*, defining

δ̄_{η′} := sup_W E_{(X,Y)∼P*} [ E_{θ∼W} [ − log f_θ(Y | X) ] + (1/η′) log E_{θ∼W} [ f_θ(Y | X)^{η′} ] ],

where the supremum is over all distributions W on Θ, we have lim_{η′↓0} δ̄_{η′} = 0, so we have an upper bound on the expectation of ∆_{n,η,η′} independent of the actual data that, for small enough η′, becomes negligibly small. But the left-hand side of (42) does not depend on η′, so if, by decreasing η′, we decrease ∆_{n,η,η′}, then CML_{n,η,η′} must increase by the same amount; so as yet we have gained nothing. Indeed, not surprisingly, Barron's bound does not hold any more for CML_{n,η,η′} with η = 1 and η′ < 1 (and in general, it does not hold for η, η′ whenever η′ < η). But, it turns out, a version of Barron's bound still holds for CML_{n,η′,η′}, for all η′ > 0: the cumulative log-risk of η′-flattened, η′-generalized Bayes is still guaranteed to be within a small red_n of the cumulative log-risk of θ̃, although red_n does monotonically increase as η′ gets smaller, simply because the prior becomes more important relative to the data (standard results in learning theory show that CML_{n,η,η} is monotonically decreasing in η, and can be upper bounded as O(1/η); see e.g. (de Rooij et al., 2014, Lemma 1)). Thus, it makes sense to consider the special case η′ = η, and to think of SafeBayes as finding the η minimizing

∆_{n,η,η} + CML_{n,η,η},   (43)

since we have clear interpretations of both terms: the second indicates, by Barron's bound, how much worse the η-generalized posterior predicts in terms of log-loss compared to the optimal θ̃; the first indicates how much is additionally lost if one is forced to predict by distributions inside the model. The second term decreases in η, the first has an upper bound which increases in η. SafeBayes can now be understood as trying to minimize both terms at the same time.
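For a finite model, the R-log-SafeBayes recipe (grid over η, run sequential η-generalized Bayes, pick the η with the smallest cumulative posterior-expected log-loss, as in (18)) fits in a few lines. A minimal sketch of our own; the function names and toy setup are illustrative, not from any accompanying code:

```python
import math, random

def safebayes_eta(data, thetas, loglik, etas, prior=None):
    # R-log-SafeBayes sketch for a finite model: for each candidate eta, run
    # sequential eta-generalized Bayes and accumulate the posterior-expected
    # log-loss (R-log-loss); return the eta with the smallest total.
    k = len(thetas)
    prior = prior or [1.0 / k] * k
    best_eta, best_loss = None, float("inf")
    for eta in etas:
        logw = [math.log(p) for p in prior]
        cum_rlog = 0.0
        for z in data:
            lls = [loglik(th, z) for th in thetas]
            m = max(logw)
            ws = [math.exp(lw - m) for lw in logw]
            s = sum(ws)
            post = [w / s for w in ws]
            cum_rlog += -sum(p * ll for p, ll in zip(post, lls))
            # eta-generalized update: likelihood raised to the power eta
            logw = [lw + eta * ll for lw, ll in zip(logw, lls)]
        if cum_rlog < best_loss:
            best_eta, best_loss = eta, cum_rlog
    return best_eta

# Sanity check on well-specified data (theta* = 0.8 is in the model): the
# criterion then favors fast concentration, i.e. a large eta.
random.seed(1)
data = [1 if random.random() < 0.8 else 0 for _ in range(200)]
loglik = lambda th, y: math.log(th if y == 1 else 1 - th)
print(safebayes_eta(data, [0.2, 0.8], loglik, [2.0 ** -i for i in range(8)]))
```

Under benign conditions the criterion is happy with η near 1; under the bad misspecification of the regression experiments, the same criterion drives η̂ down to a small value, as discussed next.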
Now, broadly speaking, the central convergence result of Grünwald (2012) states that ∆_{n,η,η} will be 'sufficiently small' for all η < 1, and under some further conditions even for η = 1, if the model is correct or convex; and it will also be 'sufficiently small' if the model is incorrect, as long as η is smaller than some 'critical' value η_crit (which may depend on n, though). Here 'sufficiently small' means that it is not the dominating term in (43). Intuitively, we would like the η̂ determined by SafeBayes to be the largest η that is smaller than η_crit. Grünwald (2012) shows that SafeBayes indeed finds such an η, and that prediction based on the generalized posterior with this η achieves good frequentist convergence rates.
Experimental Illustration: Consider the main wrong-model experiment of Section 5. Figure 13 shows, as a function of η, in red, the cumulative η-R-log-loss achieved by SafeBayes, averaged over 30 runs of Experiment 1 (Bayesian model averaging with incorrect model) of Figure 3. In each individual run, SafeBayes picks the η̂ minimizing this quantity; we thus see that on most runs, η̂ is close to 0.4. In contrast to the η-R-log-loss, and as predicted by theory, the η-mix-loss (in purple) decreases monotonically and coincides with the standard Bayesian log-loss at η = 1 and with the η-R-log-loss as η ↓ 0. We also see hypercompression again: near η = 1, both the Bayesian log-loss and the mix-loss are smaller than the log-loss achieved by the best θ̃ in the model. At η = 0.5, there is a sudden sharp rise in ∆_{n,η,η} (the difference between the red and purple curves). We can think of SafeBayes as trying to identify this 'critical' η_crit.
Theorem 1 shows that, if both model and prior are well specified, then the Bayesian posterior cumulatively concentrates in a very strong sense. More generally, if the model is correct, but also if there is 'benign' misspecification, then, under some conditions on the prior, by the results of Grünwald (2012), the Bayesian posterior eventually cumulatively concentrates at η = 1. One might thus be tempted to interpret η_crit (the learning rate which SafeBayes tries to learn) as 'the largest learning rate at which the posterior cumulatively concentrates'. However, this interpretation works only if η_crit = 1. If η_crit < 1, we can only show that, for every η < η_crit, ∆_{n,η,η} is small; true cumulative concentration would instead mean that ∆_{n,η,1} is small for such η (note that we must have ∆_{n,η,η} ≤ ∆_{n,η,1} by Jensen). The figure shows that ∆_{n,η,1} (the difference between the red and blue curves) may indeed be large even at small η. A better interpretation is that, for every fixed η, with decreasing η′, the geometry of the (η, η′)-mix-loss changes, so that the loss difference between the mix-loss and the R-log-loss obtained by randomization gets smaller. By then further using the generalized posterior for the same η′, we guarantee that a version of Barron's bound holds for the (η′, η′)-mix-loss.

Figure 13: Cumulative losses up to sample 100 (where the posterior has not converged yet) as a function of η, averaged over 30 runs, for the experiment of Figure 3. η-B-log-loss is the cumulative log-loss achieved by standard Bayes with the η-generalized posterior.
Replacing R- by I-loss Although the proofs of Grünwald (2012) are optimized for R-SafeBayes, the same story as above can be told for any fixed transformation from the posterior to a possibly randomized prediction, i.e. anything of the form (21); in particular for the most extreme transformation, where we replace the posterior predictive by the distribution indexed by the posterior mean parameters, so that instead of R-SafeBayes we end up with I-SafeBayes. In fact, the importance of the distinction between 'in-model' and 'out-model' prediction under model misspecification has been emphasized before (Grünwald, 2007; Barron and Hengartner, 1998; Kotłowski et al., 2010). In general, although we do not know how to exploit this intuition to strengthen the convergence proofs of Grünwald (2012), it seems more natural to replace the randomized predictions by deterministic, in-model predictions.

Discussion, Open Problems and Conclusion
"If a subjective distribution Π attaches probability zero to a non-ignorable event, and if this event happens, then Π must be treated with suspicion, and modified or replaced" (emphasis added) -A.P. Dawid (1982).
We already discussed the theoretical significance of the inconsistency result in the introduction. Extensive further discussion of Bayesian inference under misspecification is given by Grünwald and Langford (2007), among others. For us, it remains to discuss the place of both the inconsistency result and our solution in Bayesian methodology.
Following the well-known Bayesian statisticians Box (1980), Good (1983), Dawid (1982, 2004) and Gelman (2004) (see also Gelman and Shalizi (2012)), we take the stance that model checking is a crucial part of successful Bayesian practice. When there is a large discrepancy between a model's predictions and actual observations, it does not suffice to merely keep gathering data and updating one's posterior: something more radical is needed. In many such cases, the right thing to do is to go back to the drawing board and try to devise a more realistic model. However, we think this story is incomplete: in machine learning and pattern recognition, one often encounters situations in which the model employed is obviously wrong in some respects, yet there is a model instantiation (parameter vector) that is pretty adequate for the specific prediction task one is interested in. Examples of such obviously-wrong-yet-pretty-adequate models are, as in this paper, assuming homoskedasticity in linear regression when the goal is to approximate the true regression function and the true noise is heteroskedastic,3 but also the use of N-grams in language modeling (is the probability of a word given the previous three words really independent of everything that was said earlier?), logistic regression in e.g. spam filtering, and every single successful data compression method that we know of (see Bayes and Gzip (Grünwald, 2007, Chapter 17, page 537)). The difference with the more standard statistical (be it Bayesian or frequentist) mode of reasoning is eloquently described in Breiman's (2001) 'two cultures'.4 Bayesian inference is among the most successful methods currently used in the obviously-wrong-yet-pretty-adequate situation (witness that state-of-the-art data compression methods such as Context-Tree Weighting (Willems et al., 1995) have a Bayesian interpretation).
Yet the present paper shows that there is a danger: even if the employed model is pretty adequate (in the sense of containing a pretty good predictor), the Bayesian machinery might not be able to find it. The Safe Bayesian algorithm can thus be viewed as an attempt to provide an alternative to the data-analysis cycle (Gelman and Shalizi, 2012) for this, in some sense, less ambitious setting: just as in the standard cycle, we perform a model check, albeit a very specific one: we check whether there is 'cumulative concentration of the posterior' (see Section 6.4). If there is not, we know that we may not be learning to predict as well as the best predictor in our model, so we modify our posterior. Not in the strong sense of 'going back to the drawing board', but in the much weaker sense of making the learning rate smaller. We cannot hope that our model of reality has improved, because we still employ the same model, but we can now guarantee that we are doing the best we can with our given model, something which may be enough for the task at hand and which, as our experiments show, cannot always be achieved with standard Bayes.
Benign vs. Bad Misspecification One might argue that the example of this paper is rather extreme, and that in practical situations, choosing a learning rate different from 1 may never be a useful thing to do. A crucial point here is that one can have 'benign' and 'bad' misspecification (Section 6.2). Under benign misspecification, standard Bayes with η = 1 will behave nicely under weak assumptions on the prior. While in our particular example, after 'eyeballing' the data one would probably have chosen a different, less misspecified model, it may be that 'bad' misspecification (as in Figure 9) also occurs, at least to some extent, in general, real-world data and is then not so easily spotted. Since we simply do not know whether such situations occur in practice, to be on the safe side, it seems desirable to have a theory about when we can get away with using standard Bayesian inference for a given prediction task even if the model is wrong, and how we can still use it with little modification if there is bad misspecification. Our work (especially the theoretical counterpart to this paper (Grünwald, 2014)) is a first step in this direction.
³ As long as, as in this paper, the tails of the conditional distribution of Y given X = x are sub-Gaussian for each x; if they are not, there may be real outliers, and then one cannot say that the model is 'pretty adequate' any more.
⁴ The 'two cultures' does not refer to the Bayesian-frequentist divide, but to the modeling vs. prediction divide. We certainly do not take the extreme view that statisticians should only be interested in prediction tasks such as classification and square-error prediction rather than density estimation and testing; our point is merely that in some cases the goal of inference is clearly defined (it could be classification, but it could also be the determination of whether some random variables are (conditionally) (in)dependent), whereas part of our model is unavoidably misspecified; and in such cases, one may want to use a generalized form of Bayesian inference.
Towards a Theory of Bayesian Inference under Misspecification What we have in mind is a theory of Bayesian inference under misspecification, in which the goal of learning plays a crucial role. The standard Bayesian approach is very ambitious: it can be used to solve every conceivable type of prediction or inference task. Every such task can be encoded as a loss or utility function, and, given the data and the prior, one merely has to calculate the posterior, and then make an optimal decision by taking the act that minimizes expected loss or maximizes expected utility according to the posterior. Crucially, one uses the same posterior, independently of the utility function at hand, implying that one believes that one's own beliefs are correct in every possible respect. We envision a more modest approach, in which one acknowledges that one's beliefs are only adequate in some respects, not in others; how one proceeds then depends on how one's model and loss function interact. For example, if one is interested in data compression, then, this problem being essentially equivalent to cumulative log-loss prediction, by Barron's (1998) bound one can simply use the standard (η = 1) Bayesian predictive distribution: even under misspecification, this guarantees that one predicts (at least!) as well as one could with the best element of one's model. If, on the other hand, one is interested in any of the KL-associated inference tasks (for linear models, these are square-loss and reliability, Section 2.3), then using η = 1 is not sufficient any more, and one may have to learn η from the data, e.g. in the Safe Bayesian manner.
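To make the last point concrete, here is a minimal numerical sketch of the SafeBayes idea for square loss (essentially the I-square flavour: pick the η whose η-generalized posterior mean incurs the smallest cumulative sequential prediction error). The toy model, a finite grid of regression slopes, the data, and the η-grid are all ours, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 0.5 x + noise; the "model" is a finite grid of slopes.
x = rng.uniform(-1, 1, size=200)
y = 0.5 * x + rng.normal(0, 0.1, size=200)
thetas = np.linspace(-1, 1, 41)

def cumulative_safebayes_loss(eta):
    """Sum over i of the square loss that the eta-generalized posterior
    on the first i-1 points incurs when predicting point i."""
    logw = np.zeros(len(thetas))                 # uniform prior
    total = 0.0
    for xi, yi in zip(x, y):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        pred = (w @ thetas) * xi                 # posterior-mean prediction
        total += (yi - pred) ** 2
        logw -= eta * (yi - thetas * xi) ** 2    # eta-generalized update
    return total

etas = [2.0 ** (-k / 2) for k in range(13)]      # 1, 1/sqrt(2), ..., 1/64
eta_hat = min(etas, key=cumulative_safebayes_loss)
print(eta_hat)
```

On benign (here: well-specified) data the selected η̂ tends to stay large; under bad misspecification the cumulative loss criterion pushes η̂ down, which is exactly the intended behavior.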
Finally, if we are interested in an inference task that is not KL-associated under our model (i.e., a model instance can be good in the KL sense but bad in the task of interest), then a more radical step is needed: either go back to the drawing board and design a new model after all; or perhaps the model can be changed in a more pragmatic way so that, for the right η, η-generalized Bayes once again finds the best predictor for the task at hand. Let us outline such a procedure for the case that the inference task is simply prediction under some loss function ℓ : Y × Ŷ → R. In this case, if the ℓ-risk is not KL-associated, this simply means that, for some data, the log-likelihood is not a monotonic function of the loss ℓ. To get the desired association, we may associate each conditional distribution Pθ(Y | X) in the model with its Bayes act δθ: δθ(x) is defined as the act ŷ ∈ Ŷ that minimizes the Pθ | X = x-expected loss E_{Y∼Pθ|X=x}[ℓ(Y, ŷ)]. We can then define a new set of densities and perform (generalized) Bayesian inference based on these. Note that this effectively replaces, for each θ, the full likelihood by a 'likelihood' in which some information has been lost, and is thus reminiscent of what is done in pseudo-likelihood (Besag, 1975), substitution likelihood (Jeffreys, 1961; Dunson and Taylor, 2005), and rank-based likelihood (Gu and Ghosal, 2009) approaches (as a Bayesian, one may not want to lose information, but whether this still applies in nonparametric problems (Robins and Wasserman, 2000), let alone under misspecification (Grünwald and Halpern, 2004), is up for debate).
(44) can be made precise in two ways. Either one just sets γ and Z(γ) to 1 and allows the new densities f_θ^new to be pseudo-densities, not necessarily integrating to 1 for each x; this is a standard approach in learning theory (Zhang, 2006b; Catoni, 2007). One could then learn η by, e.g., the basic SafeBayes algorithm with ℓθ(x, y) := ℓ(y, δθ(x)) instead of the log-loss. Or one could define Z(γ) so that the densities normalize (how to achieve this if ∫ e^{−γℓ(y,δθ(x))} dy depends on x is explained by Grünwald (2008)) and put a prior on γ as well (for linear models, this is akin to putting a prior on the variance). This will make the loss KL-associated, and the KL-optimal θ will also have the reliability property; see again Grünwald (2008) for details. In this case we get, with zi = (xi, yi) and ℓθ(zi) := ℓ(yi, δθ(xi)), and using a prior on Θ and the scaling parameter γ, that the η-generalized posterior becomes

π(θ, γ | z^n) ∝ exp(−ηγ Σ_{i=1}^n ℓθ(zi)) · Z(γ)^{−ηn} · π(θ, γ).    (45)

This idea was, in essence, already suggested by Grünwald (1998, Example 5.4) (see also Grünwald (1999)) under the name of entropification (however, Grünwald's papers wrongly suggest that, by introducing the scale parameter γ, it would be sufficient to only consider η = 1); see also Lacoste-Julien et al. (2011) and Quadrianto and Ghahramani (2014).
Now both 'pure' subjective Bayesians and 'pure' frequentists might dismiss this program as severe ad-hockery: the strict Bayesian would claim that nothing is needed on top of the Bayesian machinery; the strict frequentist would argue that Bayesian inference was never designed to 'work' under misspecification, so in misspecified situations it might be better to avoid Bayesian methods altogether rather than trying to 'repair' them. We strongly disagree with both types of purism, the reason being the ever-increasing number of successful applications of Bayesian methods in machine learning in situations in which models are obviously wrong.
We would like to challenge the pure subjective Bayesian to explain this success, given that the statistician is using prior distributions that reflect beliefs which she knows to be false, and which are thus not really her beliefs. We would like to challenge the pure frequentist to come up with better, non-Bayesian methods instead. In summary, we would urge both purists not to throw away the Bayesian baby with the misspecified bath water! Moreover, from a prequential (Dawid, 1984), learning theory (citations below) and Minimum Description Length (MDL) perspective, the extension from Bayes to SafeBayes is perfectly natural. From the prequential perspective, SafeBayes seeks to find the largest η at which the generalized Bayesian predictions have a predictive interpretation in terms of the loss of interest rather than the log-loss. The learning theory and MDL perspectives are further explained in the next section.

Related Work I: Learning Theory and MDL
Learning Theory From the learning theory perspective, generalized Bayesian updating as in (45) with Z(γ) set to 1 can be seen as the result of a simple regularized loss minimization procedure (this was probably first noted by Williams (1980); see in particular Zhang (2006b)), which means that it continues to make sense if exp(−γℓθ) as in (44) does not have a direct probabilistic interpretation. Variations of such generalized Bayesian updating are known as the 'aggregating algorithm', 'Hedge' or 'exponential weights', and often have good worst-case optimality properties in nonstochastic settings (Vovk, 1990; Cesa-Bianchi and Lugosi, 2006), but to get these the learning rate must often be set as small as O(1/√n). Similarly, PAC-Bayesian inference (Audibert, 2004; Zhang, 2006b; Catoni, 2007; for a variation, see Freund et al. (2004)) is also based on a posterior of the form (44) and can achieve minimax optimal rates in, e.g., classification problems by choosing an appropriate η, usually also very small. From this perspective, SafeBayes can be understood as trying to find a larger η than the worst-case optimal one, if the data indicate that the situation is not worst-case and faster learning is possible. Finally, Bissiri et al. (2013) give a motivation for (45) (with Z(γ) ≡ 1) based on coherence arguments that are more Bayesian in flavour.
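The exponential-weights ('Hedge') update just mentioned takes only a few lines; the two constant experts and the learning rate in this sketch are illustrative, not from the paper:

```python
import numpy as np

def hedge(losses, eta):
    """Exponential weights ("Hedge") over K experts.

    losses: array of shape (T, K), the loss of each expert at each round.
    Returns the weight vectors played at each round (before seeing that
    round's losses).
    """
    T, K = losses.shape
    logw = np.zeros(K)               # uniform initial weights
    played = np.empty((T, K))
    for t in range(T):
        w = np.exp(logw - logw.max())
        played[t] = w / w.sum()
        logw -= eta * losses[t]      # multiplicative update
    return played

# Two experts: one always incurs loss 1, the other always loss 0.
losses = np.column_stack([np.ones(50), np.zeros(50)])
weights = hedge(losses, eta=0.5)
print(weights[-1])                   # mass shifts to the better expert
```

A smaller η slows the shift of mass, which is exactly the trade-off SafeBayes navigates: worst-case safety (small η) versus fast learning when the data are benign (large η).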
MDL Of particular interest is the interpretation of the SafeBayesian method in terms of the MDL principle for model selection, which views learning as data compression. When several models for the same data are available, MDL picks the model that extracts the most 'regularity' from the data, as measured by the minimum number of bits needed to code the data with the help of the model. This interpretation remains valid even if a model is completely misspecified (Grünwald, 2007). The resulting procedure (based on so-called normalized maximum likelihood (NML) codelengths) is operationally almost identical to Bayes factor model selection. Thus, it provides a potential answer to the question 'what does a high posterior belief in a model really mean, since one knows all models under consideration to be incorrect anyway?' (asked by, e.g., Gelman and Shalizi (2012)): even if all models are wrong, the information-theoretic MDL interpretation stands. However, our work implies that there is a serious issue with these NML codes: note that any distribution P in a model M can be mapped to a code (the Shannon-Fano code) that would be optimal in expectation if data were sampled from P. Now, our work shows that if the data are sampled from some P* ∉ M, then the codes based on Bayesian predictive distributions can sometimes compress substantially better in expectation than can be done based on any P ∈ M; this is the hypercompression phenomenon of Section 6.3. The same then holds for the NML codes, which assign almost the same codelengths as the Bayesian ones. Our work thus invalidates the interpretation of NML codelengths as 'compression with the help of (and only of!) the model', and suggests that, similarly to in-model SafeBayes, one should design and use 'in-model' versions of the NML codes instead: codes that are guaranteed not to outperform, at least in expectation, the code based on the best distribution in the model.

Related Work II: Analysis of Bayesian Behavior under Misspecification
Consistency Theorems The study of consistency and rates of convergence under misspecification for likelihood-based and specifically Bayesian methods goes back at least to Berk (1966). For recent state-of-the-art work on likelihood-based, non-Bayesian methods see, e.g., Dümbgen et al. (2011) and the very general Spokoiny et al. (2012). Recent work on Bayesian methods includes Kleijn and van der Vaart (2006) and De Blasi and Walker (2013), who obtained results in quite general i.i.d. nonparametric settings, non-i.i.d. settings (Shalizi, 2009), and more specific settings; see also Grünwald (2014). Yet, as explicitly remarked by De Blasi and Walker (2013), the conditions on model and prior needed for consistency under misspecification are generally stronger than those needed when the model is correct. Essentially, if the data are i.i.d. both according to the model and the sampling distribution P*, then Theorem 1 (in particular its Corollary 1) of De Blasi and Walker (2013) implies the following: if, for all ε > 0, the model can be covered by a finite number of ε-Hellinger balls, then the Bayesian posterior eventually concentrates: for all δ, γ > 0, the posterior mass on distributions within Hellinger distance δ of the Pθ that is closest to P* in KL divergence will become larger than 1 − γ for all n larger than some nγ. This implies that both in the ridge regression (finite p) and in the model averaging experiments (finite pmax), Bayes eventually 'recovers', as we indeed see in our experimental results. However, if pmax = ∞, then the model has no finite Hellinger cover any more for small enough ε, and indeed the conditions for Theorem 1 of De Blasi and Walker (2013) do not apply any more. Our results show that in such a case we can indeed have inconsistency if the model is incorrect. On the other hand, even if pmax = ∞, we do have consistency in the setup of our correct-model experiment for the standard Bayesian posterior, as follows from the results of Zhang (2006a).
The Limiting η = 1 Like several earlier results (Barron and Cover, 1991; Walker and Hjort, 2002), Zhang's consistency results for correct models hold under very weak conditions for generalized Bayes with any η < 1, and only under much stronger conditions for η = 1. Zhang provides an example of inconsistency-like behavior in the well-specified case with η = 1 that automatically disappears as soon as one picks η < 1, leading Zhang (2006a) to claim that in general, generalized Bayesian methods (η < 1) are more stable than standard Bayesian ones. Zhang's example, and the example of Bayesian model selection inconsistency in a well-specified model by Csiszár and Shields (2000), are closely related to ours, in that the Bayes predictive distribution for η = 1 becomes significantly different from any distribution in the model (see Figure 9). In their examples, the problem is resolved by taking any η < 1; in our misspecification case, η must be taken much smaller still.
Anomalous Behavior and Modifications of the Bayesian Posterior under Misspecification Anomalous behavior of Bayesian inference under misspecification was, of course, observed before, e.g. (less dramatically than here) by Yang (2007) and Müller (2013), and (as dramatically, but involving a very artificial model) by Grünwald and Langford (2007). Presumably also related is the 'brittleness' of Bayesian inference observed by Owhadi and Scovel (2013). Not surprisingly, then, we are not the first to suggest modifications of likelihood-based estimators (see, e.g., White (1982), Royall and Tsou (2003), Kotłowski et al. (2010)) and posteriors (Royall and Tsou, 2003; Hoff and Wakefield, 2012; Doucet and Shephard, 2012; Müller, 2013). The latter three approaches (which extend the first) employ the so-called sandwich posterior, in which the covariance matrix of the posterior is changed based on a 'sandwich formula' involving the empirical variance; Müller (2013) provides extensive explanation and experimentation. Compared to the sandwich approach, our proposal, besides being applicable in fully nonparametric contexts, seems substantially more radical. This can be seen from the regression applications in Müller (2013), which involve a noninformative Jeffreys' prior on the regression coefficient vector β. With such a prior (as well as with any normal prior scaled by the variance σ²), the posterior mean of β, and thus also the frequentist square-risk (which only depends on the posterior mean), remains unaffected by the sandwich modification, so for square-risk the method would perform like standard Bayes in our model-wrong experiments. Thus Müller (2013, Section 2.4) demonstrates its usefulness on other loss functions. Nevertheless, both the sandwich and the Safe Bayesian methods can be thought of as methods for measuring the spread of a posterior, and it would be useful to compare the two in detail, both in theory and in practice.

Future Work and Open Problems
The results of this paper raise several issues and prompt the following research agenda:
1. The misspecification in our example would presumably be easily spotted in practice. This raises the question whether 'bad' misspecification also arises for data sets that occur in practice and for which it would not be easily spotted. Currently, we know of only one experiment in this direction: Jansen (2013) applied the Bayesian Lasso (Park and Casella, 2008) to several real-world data sets, where the λ (i.e. 1/η) is taken that minimizes the cumulative square-loss, whereas at the same time σ² is a free parameter. Thus it is a hybrid of I-square-SafeBayes and I-log-SafeBayes, but equal to neither; the method was (somewhat) outperformed by standard Bayes on most data sets tried. However, we also tried this hybrid method in the model-wrong experiment of this paper and found that it is not competitive with either of the two 'true' in-model SafeBayes methods either; so the experiment does not 'really' test SafeBayes, and more precise experiments are needed.
2. Our method has one major disadvantage: even if the data do not have a natural ordering, the η̂ selected by SafeBayes will, in general, be order-dependent. Grünwald (2011) suggested a very different (and in fact, the first) method to learn η̂ that does not have this problem. However, it is only applicable to countable models and has no obvious computationally efficient implementation, so we do not know whether it has a future. Another method that is clearly related to I-square-SafeBayes is to determine η using leave-one-out cross-validation based on the squared error. This method is also order-independent and behaves comparably to I-square-SafeBayes (Appendix A.1), but it is not clear how to extend it to general misspecified models. While we show in the same appendix that cross-validation based on the log-loss of the Bayes predictive distribution fails dramatically, it may be that cross-validation based on the log-loss of the Bayes posterior mean would generally work fine, and this method can be applied to general misspecified models, not just linear ones.
Compared to I-log-SafeBayes, this in-model log-loss cross-validation would have the advantage that it is order-independent, and the disadvantage that it cannot (at least not straightforwardly) be used in an online setting and/or for non-i.i.d. models. Also, we suspect that if the number of models is exponential in the number of covariates (as in variable selection), cross-validation may be prone to overfitting whereas SafeBayes would not be, but this is just extrapolation from the well-specified case: it would be useful to investigate 'in-model cross-validation' further.
3. What exactly are the relations between the sandwich posterior (see above) and our approach? It would be good to test SafeBayes on the data sets used by Müller (2013).
4. It would be useful to establish exactly which properties of Bayesian updating remain valid for generalized Bayesian updating, and which properties do not hold any more. For example, telescoping (Cesa-Bianchi and Lugosi, 2006) holds for the standard posterior and for the η-flattened, η-generalized posterior, but not for the (nonflattened) η-generalized posterior.
5. As discussed at the end of Section 6.5, the final term in (23) is lacking in the in-model versions of SafeBayes, and this does suggest that they should work better than the randomization versions: the corresponding ∆η,η is always smaller. Yet we have no theoretical results to this end, and our empirical results in this paper confirm this to some extent (R-square-SafeBayes is not competitive), but not fully (R-log-SafeBayes is competitive), so more research is needed here.
6. As we indicated in Section 6.3, hypercompression implies nonconcentration, but we do not know whether the reverse implication holds as well, so we may perhaps have bad misspecification yet no hypercompression. It would give significant insight to know whether this can indeed happen.
7. In light of the discussion below (44), one would like to formulate a general theory of substitution likelihoods, so that likelihoods can be determined based on the inference task of interest and this task becomes KL-associated, for arbitrary prediction tasks. Ideally, (44) and approaches such as pseudo-likelihood and rank-based likelihood would all become special cases. If this can be done, we would have a truly generalized Bayesian method.

A Experiments on Variations of the Prior and the Model
Apart from the priors on parameters given the models that we used in our main experiments, we tried several alternative prior distributions, described in the subsections below. The first subsection describes experiments with fixed σ² (i.e., a degenerate prior on σ²).

A.1 Experiments with Fixed σ²
When models with fixed σ² are used, our two SafeBayes methods become R-square- and I-square-SafeBayes, as defined in Section 4.2. These also have a direct interpretation as trying to find the best η for predicting with a square-loss function, as was explained in that section. In this context, the value η = 1 has no special status, so we now also tried values η > 1 (we experimented with varying η in the previous varying-σ² experiments as well, but there it did not make any substantial difference in the results). Specifically, we set S_n in the Safe Bayesian algorithm to {2^κmax, 2^(κmax−κstep), 2^(κmax−2κstep), . . . , 2^−κmax}, with κstep = 1/2 and κmax = 6. All priors on the regression coefficients β remain as described in Section 5.1.
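For concreteness, the grid S_n just described can be generated as follows (a sketch; the variable names are ours):

```python
# Grid of candidate learning rates 2^6, 2^5.5, ..., 2^-6,
# as used in the fixed-sigma^2 experiments.
kappa_max, kappa_step = 6, 0.5
n_steps = int(2 * kappa_max / kappa_step)        # 24 steps, hence 25 grid points
grid = [2.0 ** (kappa_max - k * kappa_step) for k in range(n_steps + 1)]
print(len(grid), grid[0], grid[-1])              # 25 points, from 64 down to 1/64
```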

A.1.1 Model Averaging Experiment, Fixed σ²
The model-correct experiment showed no surprises (all methods performed well), so we only show results for the model-wrong experiment, as described in Section 5.1, testing each of Bayes, R-square- and I-square-SafeBayes twice: once based on a model with σ² set overly large (3 times the optimal value), and once with σ² set overly small (1/3 times the optimal value). To allow precise comparison with the results in the main text, we also show the behavior of R-log-SafeBayes with varying variance (defined precisely as in Figure 3) in Figure 14.

A.1.2 Ridge Regression Experiments, Fixed σ²
Again we only show results for the model-wrong experiment.
Note that here standard Bayes, as can be seen by plugging η = 1 into (12), does not depend on σ² and thus coincides in terms of square-risk behavior with standard Bayes in the variable-σ² case as in Figure 7. Also (see below (12)), I-square-SafeBayes for fixed σ² does not itself depend on σ² and simply minimizes the cumulative sum of squared errors.
Just as for ridge regression with variable σ², one may equivalently interpret the η-generalized posterior means β̂i,η as the standard, nongeneralized Bayesian posterior means that one would get with a modified prior on β, proportional to the original prior raised to the power η^−1 (see above (31), Section 5.4). It may then once again seem reasonable to learn η itself in a Bayesian or likelihood-based way such as empirical Bayes.⁵ Indeed, this was suggested implicitly as early as 1999 by one of us (Grünwald, 1999). The procedure described in Section 3.4.3 ('hierarchical loss') of Bissiri et al. (2013) also arrives, via a different derivation, at a similar prescription for finding η (we immediately add that the authors describe many ways of determining η, of which this is just one). Unfortunately, just as for the empirical Bayes learning of η with varying σ², the figures below indicate that it does not perform well at all.
Figure 14: As Figure 3, with pmax = 50. The second graph is a scaled version of the first. Since fixed σ² implies a fixed overconfidence ratio, the overconfidence graph is not shown. For clarity, in the η-graph we do not show standard deviations of the η's.
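The equivalence just described is easy to check numerically: with a Gaussian prior N(0, Σ₀) and fixed σ², the η-generalized posterior mean of β equals the standard (η = 1) posterior mean under the prior raised to the power 1/η, which is again Gaussian, with covariance ηΣ₀. The data and dimensions below are synthetic, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma2, eta = 30, 5, 0.25, 0.2
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(0, np.sqrt(sigma2), size=n)
Sigma0 = np.eye(p)

def posterior_mean(eta, prior_cov):
    # eta-generalized posterior for the Gaussian linear model with fixed
    # sigma^2: proportional to likelihood^eta * prior; its mean solves a
    # ridge-type system with penalty (sigma2 / eta) * prior_cov^{-1}.
    A = X.T @ X + (sigma2 / eta) * np.linalg.inv(prior_cov)
    return np.linalg.solve(A, X.T @ y)

m_gen = posterior_mean(eta, Sigma0)            # eta-generalized, prior N(0, Sigma0)
m_std = posterior_mean(1.0, eta * Sigma0)      # standard Bayes, prior N(0, eta * Sigma0)
print(np.allclose(m_gen, m_std))               # True
```

So a small η acts like a tighter prior: it regularizes the posterior mean more strongly.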
Conclusion Standard Bayes again performs comparably badly in both experiments (note the difference in scale in the first graphs of Figures 14 and 15). I-square-SafeBayes behaves excellently in both experiments. But now, in the ridge experiment, R-square-SafeBayes becomes a highly problematic method for small samples, worse even than standard Bayes. The reason is its dependence on the specified σ², as can be clearly seen from (23). If σ² was set much larger than the actual average prediction error on the sample, then the third term in (23) dominates. This term decreases with η and thus automatically pushes η̂ 'upward' by an arbitrary amount. The term also decreases with n, so that the problem disappears at a large enough sample size. The problem did not occur in the model averaging experiment; we suspect that this is because in that experiment there is substantial prior mass on a small model (p = 4) containing the pseudo-truth, and for this submodel the final term in (23) (which is approximately linear in p) is much smaller than for p = 50 and does not have such a strong influence.

A.2 Slightly Informative Prior
Again we only consider model-wrong experiments. Within each model, we now use the following prior parameters: β₀ = 0 and Σ₀ = 10³·I for the multivariate normal distribution on β; and a₀ = 1 and b₀ = σ*²·a₀ (as before) for the inverse gamma distribution on σ² (where σ*² is the true variance of the noise in our data, as defined in Section 5.1.2). We repeated the model-wrong experiment of Section 5.3 with pmax = 50 with this slightly informative prior and obtained results similar to those obtained using our original informative prior with Σ₀ = I: Bayes performs badly roughly between samples 90 and 130 and has some risk spikes before that, so that its overall performance is comparable to before, while R-SafeBayes and I-SafeBayes both obtain good risks.
We also repeated the model-wrong experiment for ridge regression (Section 5.4). Here the effect of the new prior on Bayes' performance is similar: the square-risk peaks at a larger value, but in a smaller range of sample sizes. However, the effect of changing the learning rate is different in this experiment from what we have seen before: here one can take η very small and still get good results. So in a sense, the problematic behavior of Bayes has a trivial solution here: just pick a very small but fixed η. R-log-SafeBayes was too conservative in this setting; I-log-SafeBayes did fine. R-log-SafeBayes became competitive again, however, if we used the discounting version described in Section B.1 below.
We omit the pictures corresponding to model selection/averaging (Section 5.3) as they show no surprises; but in Figure 16 we do repeat the pictures for ridge regression (Section 5.4), because they do give additional insight: Note that the phenomenon is now much more 'temporary'. In the beginning, it seems that there is a sort of cancellation between the influence of the irrelevant variables and standard Bayes behaves fine. However, if we increase the number of irrelevant variables, the problem (while starting at a later sample) takes longer to recover from.

A.3 Prior as advised by Raftery et al.
In Raftery et al. (1997), some guidelines for choosing priors in regression models are given. Letting β₀ denote the prior mean, one of their recommendations is that the prior densities at β = β₀ and β = β₀ + 1 should differ by a factor of at most √10. The prior density on β marginalized over σ² follows a multivariate t-distribution, and the factor in question varies with the dimensionality of β, so that models of larger order are given less informative priors. In our case, we find that the resulting prior is always less informative than our original prior, and for model M₁₀ and above (i.e. β of dimension 11 or larger), it becomes even less informative than the prior introduced in the previous section.
For the prior on σ², Raftery et al. advise that the density should vary by no more than a factor of 10 in a region of σ² ranging from some small value to the sample variance of y. For our choice of hyperparameters a₀ = 1, b₀ = 1/40, the mode of π(σ²) is at b₀/(a₀ + 1) = 1/80, and the density is within a factor 10 of this maximum in the approximate region (0.0037, 0.0941). For the correct-model experiments, the actual variance of Y is 0.065; for the wrong-model experiments, it is 0.045 (with a larger variance for 'good' points and zero variance for 'easy' points). For both experiments, the factor-10 condition holds between Var(Y)/12 and Var(Y). We conclude that this prior satisfies the guidelines of Raftery et al. quite well.
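The numbers in this paragraph are easy to reproduce: the inverse-gamma density for σ² with a₀ = 1, b₀ = 1/40 has its mode at b₀/(a₀ + 1) = 1/80, and a simple grid search recovers the factor-10 region (a sketch; the grid is ours):

```python
import numpy as np

a0, b0 = 1.0, 1 / 40

def log_density(v):
    # Unnormalized inverse-gamma log density for sigma^2 = v
    return -(a0 + 1) * np.log(v) - b0 / v

v = np.linspace(1e-4, 0.2, 200001)
ld = log_density(v)
mode = b0 / (a0 + 1)                       # = 1/80 = 0.0125
inside = v[ld >= log_density(mode) - np.log(10)]
print(mode, inside.min(), inside.max())    # roughly 0.0125, 0.0037, 0.0941
```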
We will refer to the prior described above as Raftery's prior (even though it is really a different prior for each model order). Using this prior, we found the following experimental results.
In the model-wrong experiment with model selection/averaging (Section 5.3) with our original prior replaced by Raftery's prior, Bayes performs somewhat better than R-log-SafeBayes (except at very small sample sizes). However, I-log-SafeBayes performs as well as Bayes, and so does the R-log-SafeBayes variant that discounts half of the initial sample when choosing the learning rate (see Section B.1).
Figure 16: Top two graphs: square-risk for two different ridge experiments. In both experiments the slightly informative prior of Section A.2 is used. In the first experiment p = 50; in the second, p = 100; otherwise the experiments are just as in the 'wrong model experiment' of Section 5.4, Figure 7, but we also included the performance of I-square-SafeBayes. The final graph shows self-confidence for the p = 100 case for Bayes and SafeBayes, on a logarithmic scale because of the range of values involved.
This might suggest that Raftery's prior could be used to accomplish the same kind of safety against wrong models that SafeBayes provides, at least in a model selection context. To test this, another experiment was performed in which the fraction of 'easy' points was increased to 75%. In this experiment, the misbehavior of Bayes seen in Section 5.3 returned, worse than before, with risks a factor 20 larger, whereas the SafeBayes methods continued to work fine. This suggests that Raftery's prior cannot be relied on if the severity of misspecification is unknown.
If Raftery's prior is used for model selection with a correct model, Bayes and the SafeBayes variants perform well, and very similarly to each other.
For ridge regression, the results with Raftery's prior for both the correct and the incorrect model experiment are very similar to those with the slightly informative prior, except that the peak in the risks is higher for all methods.

A.4 The g-prior
Another prior we experimented with was the g-prior, a popular choice in model selection contexts (Zellner, 1986; Liang et al., 2008); for all definitions we refer to the latter paper. In contrast to all other priors we considered, the g-prior depends on the design matrix X_n^T X_n, and hence can only be used in settings where this matrix, and hence the eventual sample size of interest n, is given once and for all. For this reason, we decided to depict in Figure 17, for each value of n, the risk obtained when predicting the n-th data point with the posterior calculated from the g-prior corresponding to the first n covariates (x_1, ..., x_n) and observed data y^{n-1}. This is subtly different from our previous graphs (e.g. Figures 3-6), which show how the risk evolves as n increases in a single run of the experiment, averaged over 30 runs.
The graph does not start at n = 0, because of another difference between the g-prior and the priors we used in other experiments: because of the same design dependence, with the g-prior the posterior on β remains a degenerate distribution on an initial segment of outcomes. For example, with M_p for p = 50, the matrix X_n^T X_n is singular until at least 50 linearly independent design vectors have been observed. For our model-wrong experiment, in which roughly half the points are 'easy' and contribute nothing to the rank of the design matrix, this means that on average about 100 observations are required before the posterior becomes nondegenerate; this explains why Figure 17 starts at n a little over 100.
The experimental results clearly indicate that the g-prior does not deal with our data in a satisfactory way, regardless of the value of g. Of the values of g we tried (up to 10^4), g ≈ 100 (shown in the graph) yielded the smallest squared risk around sample size n = 200; for larger sample sizes, larger values of g were better, but only slightly. Furthermore, as we in fact expected by analogy with learning η by Empirical Bayes, the value of g found by Empirical Bayes is not optimal for our data and only makes things worse: larger values of g (which put more weight on the data) would yield smaller risks.
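For concreteness, here is a minimal sketch of the Empirical Bayes choice of g, using the closed-form marginal likelihood from Liang et al. (2008) for a model with intercept and centered response; the grid search and the example values of n, p and R² are illustrative choices, not our experimental settings.

```python
import numpy as np

def log_marglik_g(g, n, p, R2):
    """Log marginal likelihood of a p-covariate model under Zellner's g-prior,
    up to a g-independent constant (closed form as in Liang et al., 2008):
    (1+g)^{(n-1-p)/2} * (1 + g(1-R^2))^{-(n-1)/2}."""
    return 0.5 * (n - 1 - p) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1 - R2))

def empirical_bayes_g(n, p, R2, grid=None):
    """Pick g by maximizing the marginal likelihood over a grid (a sketch)."""
    if grid is None:
        grid = np.logspace(-2, 6, 2000)
    vals = [log_marglik_g(g, n, p, R2) for g in grid]
    return grid[int(np.argmax(vals))]

# Illustrative example: n = 200 observations, p = 10 covariates, fit R^2 = 0.5.
g_hat = empirical_bayes_g(200, 10, 0.5)
```

As a sanity check, for these numbers the analytic maximizer is F − 1 ≈ 17.9, where F is the usual F-statistic (R²/p) / ((1 − R²)/(n − 1 − p)), and the grid search recovers this value.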

B Experiments on Variations on the Method
Below we look at a number of other more or less promising alternative approaches to modifying standard Bayes.

Figure 17: Risk as a function of sample size (starting at the first sample size at which the g-prior is defined) for model averaging and selection based on the g-prior in the model-wrong experiment of Figure 3, both with g = 100 and with g chosen by Empirical Bayes at each sample size.

B.1 An Idea to be Explored Further: Discounting Initial Observations
Just like standard Bayes, all our SafeBayesian methods are, at heart, prequential (Dawid, 1984). All prequential methods suffer to a greater or lesser extent from the start-up problem (van Erven et al., 2007; Wong and Clarke, 2004): sequential predictions based on a model M_p may perform very badly on the first few samples. While such methods quickly recover once the sample size gets large, the behavior on the first few samples may dominate their cumulative prediction error for a while, leading to suboptimal choices for moderate n. We can address this issue in several ways. A very simple method to 'discount' initial observations, apparently first used (implicitly) to modify standard Bayes factors by Lempers (1971, Chapter 6), is to only look at the cumulative sequential prediction error on the second half of the sample, so that the first half merely functions as a 'warming-up' sample (Catoni, 2012). Without claiming that this is the 'right' method to discount initial observations, we experimented with it to see whether it can further improve the performance of SafeBayes; for simplicity, we concentrated on R-log-SafeBayes.
We found that in most experiments, this new method for determining η performed very similarly to the standard method based on the whole sample, sometimes slightly better and sometimes slightly worse, making it hard to say whether the new method is an improvement. Still, there are two experiments in which the new method performed substantially better, namely the experiments with less informative priors of Sections A.2 and A.3. Thus we cannot simply dismiss the idea of fitting η based on only part of the data or, more generally, of discounting initial observations, and it would be interesting to explore this further in future work: taking half of the data is of course rather arbitrary, and better choices may be possible. In particular, we may try a variation of switching between η's, analogously to the switch distribution (van Erven et al., 2007), to counter the start-up problem.
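The selection step of this discounting variant can be sketched as follows; the per-point sequential losses are assumed precomputed (in our experiments they are available in closed form), and the toy numbers below are purely illustrative.

```python
import numpy as np

def safebayes_eta(seq_loss, discount_half=False):
    """Pick the learning rate minimizing cumulative sequential loss.

    seq_loss: dict mapping eta -> array of per-point sequential losses,
    where seq_loss[eta][i] is the loss incurred by the eta-generalized
    posterior (fit on points 0..i-1) when predicting point i.
    With discount_half=True, only the second half of the sample counts,
    as in the Lempers-style discounting described above.
    """
    def cumulative(losses):
        start = len(losses) // 2 if discount_half else 0
        return losses[start:].sum()
    return min(seq_loss, key=lambda eta: cumulative(np.asarray(seq_loss[eta])))

# Toy illustration: eta = 1.0 predicts badly early on but best later.
losses = {1.0: np.array([9.0, 9.0, 1.0, 1.0]),
          0.5: np.array([2.0, 2.0, 2.0, 2.0])}
eta_full = safebayes_eta(losses)                       # full sample: 0.5
eta_half = safebayes_eta(losses, discount_half=True)   # second half only: 1.0
```

The toy example shows exactly the start-up effect: over the full sample the cautious rate wins, while discounting the warming-up half lets the eventually-better rate be selected.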

B.2 Other Methods for Model Selection: AIC, BIC, (generalized) Cross-Validation
We tested the performance of several classic model selection methods on the same data and models as in our main model selection/averaging experiment, Section 5.3. We associated with each model M_p its standard (i.e. η = 1) Bayes predictive distribution under the prior described in Section 5.1 (these generally perform better than the maximum likelihood distributions based on M_p whose use is more standard here). We then ran leave-one-out cross-validation, 10-fold cross-validation and GCV based on the predictions (posterior means/MAPs β̂_{i,η}) made by these predictive distributions. We also compared the models via AIC and BIC, where for AIC we used the small-sample correction of Hurvich and Tsai (1989).

Figure 18: Squared risk and selected model order for five different model selection methods. The risks in this graph are risks of single models selected by each method (similar to the MAP risks shown for Bayes and SafeBayes).

We see that AIC and generalized cross-validation have risks and selected model orders similar to those of standard Bayes, though they do not recover as well as Bayes when the sample size increases. Of the other three methods, BIC and 10-fold cross-validation find the optimal model and have smaller risks towards the end than leave-one-out cross-validation, which continues to select larger-than-optimal models with substantial probability. Note that none of the methods can compete with SafeBayes on sample sizes below 150: SafeBayes's risk goes down immediately after the start of the experiment, while for all the other methods it first goes up. Also, SafeBayes finds the optimal model quickly, without first trying much larger models.
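For reference, the information criteria can be sketched as follows: a minimal version for Gaussian linear models, stated up to an additive constant. The convention that k counts the noise variance as a free parameter, and the RSS values in the example, are illustrative assumptions.

```python
import numpy as np

def aicc(rss, n, k):
    """AIC for a Gaussian linear model with k free parameters, up to an
    additive constant, with the Hurvich-Tsai (1989) small-sample correction
    AICc = AIC + 2k(k+1)/(n-k-1)."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def bic(rss, n, k):
    """BIC for a Gaussian linear model, up to the same additive constant."""
    return n * np.log(rss / n) + k * np.log(n)

# Toy model selection: pick the order p minimizing the criterion, given the
# residual sum of squares of each fitted model (illustrative numbers; here
# k = p + 2 counts p coefficients, the intercept and the noise variance).
rss_per_order = {0: 120.0, 1: 80.0, 2: 79.0}
n = 100
best_bic = min(rss_per_order, key=lambda p: bic(rss_per_order[p], n, p + 2))
best_aicc = min(rss_per_order, key=lambda p: aicc(rss_per_order[p], n, p + 2))
```

In the toy example both criteria pick order 1: the drop in RSS from order 1 to order 2 is far too small to justify the extra parameter.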

B.3 Other Methods for Learning η: Cross-Validation on Log-Loss and on Squared Loss
As indicated in the introduction and Section 4.2, finding η̂ by I-square-SafeBayes is somewhat similar to finding η̂ by leave-one-out cross-validation with the squared-error loss, the difference being that I-square-SafeBayes finds the optimal η for predicting each point based on past data points rather than on all other data points. Since the leave-one-out method is often employed in ridge regression, it seemed of interest to try it out here as well. Figure 19 shows that LOO cross-validation indeed performs very similarly to R-log- and I-square-SafeBayes in terms of square-risk, but is consistently a bit worse in terms of self-confidence; we do not have a clear explanation for this phenomenon. Perhaps more interestingly, in Figure 20 we show what happens if we use LOO cross-validation based on the log-loss of the Bayes predictive distribution, which may seem a reasonable procedure from a 'likelihoodist' perspective. Here we see dismal behavior, the reason being the hypercompression phenomenon of Section 6.3: cross-validation will select a model that, at the given sample size, has small log-risk, but because of hypercompression this model can sometimes perform very badly on all the associated prediction tasks, such as square-risk and reliability.
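The contrast between the two selection rules can be sketched as follows. We use the fact that, for a linear model with N(0, τ²I) prior on β and known noise variance σ², the mean of the η-generalized posterior is a ridge estimator with penalty σ²/(ητ²). The data, the grid of η's, and the values of σ² and τ² below are illustrative, not our experimental settings.

```python
import numpy as np

def eta_posterior_mean(X, y, eta, sigma2=1.0, tau2=1.0):
    """Mean of the eta-generalized posterior for a linear model with
    N(0, tau2 I) prior on beta and known noise variance sigma2:
    a ridge estimator with penalty sigma2 / (eta * tau2)."""
    lam = sigma2 / (eta * tau2)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def sq_risk_prequential(X, y, eta):
    """I-square-SafeBayes style: predict each point from *past* points only."""
    n, p = X.shape
    losses = [(y[i] - X[i] @ eta_posterior_mean(X[:i], y[:i], eta)) ** 2
              for i in range(p, n)]  # need a few points before predicting
    return np.mean(losses)

def sq_risk_loo(X, y, eta):
    """Leave-one-out: predict each point from *all other* points."""
    n = len(y)
    losses = []
    for i in range(n):
        keep = np.arange(n) != i
        b = eta_posterior_mean(X[keep], y[keep], eta)
        losses.append((y[i] - X[i] @ b) ** 2)
    return np.mean(losses)

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 3))
y = X @ np.array([1.0, 0.0, 0.0]) + 0.3 * rng.standard_normal(40)
etas = [0.1, 0.5, 1.0]
eta_preq = min(etas, key=lambda e: sq_risk_prequential(X, y, e))
eta_loo = min(etas, key=lambda e: sq_risk_loo(X, y, e))
```

The only structural difference between the two criteria is the training set used for point i: a prefix in the prequential case versus the complement in the LOO case.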

C Experiments on Variations of the Truth
Other Distributions of Covariates In all experiments described in Section 5 and the previous appendices, the covariates (X_i1, X_i2, ...) were sampled independently from a zero-mean multivariate Gaussian. We repeated most of our experiments with X_i1, X_i2, ... sampled independently and uniformly from [−1, 1], and, as already indicated in the introduction, with polynomials, X_ij = S_i^j with S_i uniform on [−1, 1]. This did not change the results in any substantial way, so we do not report on it further.
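For concreteness, the polynomial covariate construction X_ij = S_i^j can be sketched as follows (the sample size and maximal degree are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 5, 4
S = rng.uniform(-1.0, 1.0, size=n)
# polynomial basis: column j-1 holds S_i^j for j = 1..p
X = np.column_stack([S ** j for j in range(1, p + 1)])
```

Since |S_i| ≤ 1, every entry of X lies in [−1, 1] as well.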
Fewer Easy and 'Less-Easy' Points If the fraction of 'easy' points is reduced, one would expect the performance of standard Bayes to improve. This is confirmed by an experiment in which each data point had probability only 1/4 of being (0, 0). Here Bayes still has some trouble finding the optimal model, but the square-risk, MAP model order, and time taken to recover are all much reduced compared to the original experiment in Section 5.3, where half the data points were 'easy'. SafeBayes, on the other hand, showed the same good performance as before.
Two points that might be raised against the use of 'easy' points in our simulations are that they are unlikely to occur in practice, and that if they did occur, they would be easily detected and dealt with in another way. To address this line of argument to some extent, another experiment was performed with a smaller contrast between 'easy' and 'hard' points. Rather than being identically (0, 0), the 'easy' points were now random, but with smaller variance than the 'hard' points: the covariates and the noise were both a factor 5 smaller (so that their variances were 25 times smaller). In this experiment, the same phenomena as in Section 5.3 occurred, albeit again on a smaller scale (though larger than in the previous, 1/4-easy experiment).
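A toy version of these data-generating variations can be sketched as follows; the true regression function is 0, but the noise standard deviation (here 1) and the dimension are illustrative choices, not the exact values of Section 5.3.

```python
import numpy as np

def sample_point(rng, p, p_easy=0.5, shrink=0.0):
    """Draw one (x, y) pair from a toy version of the mixed distribution.

    With probability p_easy the point is 'easy': exactly (0, 0) when
    shrink == 0, or a regular point with covariates and noise scaled down
    by `shrink` (e.g. 1/5, giving variances 25 times smaller) otherwise.
    """
    x = rng.standard_normal(p)
    y = rng.standard_normal()  # true regression function is 0
    if rng.random() < p_easy:
        x, y = shrink * x, shrink * y
    return x, y

rng = np.random.default_rng(3)
original = [sample_point(rng, 4) for _ in range(1000)]           # (0,0) easy points
less_easy = [sample_point(rng, 4, shrink=0.2) for _ in range(1000)]  # scaled easy points
```

Setting p_easy=0.25 instead of 0.5 gives the 'fewer easy points' variant described above.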

Different Optimal Regression Functions
We experimented with a number of variations of the wrong-model experiment of Section 5.3, by changing the underlying 'true' distribution P*. In each variation, we still tossed, at each i, an independent biased coin to determine whether point i would be 'easy' (still with probability 1/2) or 'regular' (probability 1/2), but in each case we changed the definition of the 'easy' instances, the 'regular' instances, or both. In all experiments, for the 'regular' instances only P*(Y_i | X_i) was changed; the marginal distribution of the X_i remained multivariate normal as before. We discuss each variation in turn. For the first experiment, all results were comparable to those of Experiment 1 in Section 5, so we do not list them. For the second experiment, the risks obtained by standard Bayes and SafeBayes were similar to each other. The model order behavior was similar to before (with standard Bayes initially selecting large model orders), but all methods recovered much more quickly, converging on the optimal model shortly after n = 50; presumably this could happen because the optimal coefficients were now substantially larger than the standard deviation of the data.
The third experiment was included to see whether there would be an effect if the 'easy' points were placed at an arbitrary point rather than at the special, fully symmetric (0, 0). We added the intercept −0.04 so as to make sure that, for the data we actually observe, E_{(X,Y)~P*}[Y_i] = (1/2) · 0.04 − (1/2) · 0.04 = 0; thus the Y-values appear centered around 0, which is standard in both frequentist and Bayesian approaches to regression (for example, both Raftery et al. (1997) and Hastie et al. (2001) preprocess the data so that Σ_{i=1}^n Y_i = 0). Again, we discerned no difference in the results, so we do not include any further details.
Finally, the fourth experiment was included just to see what happens if, contrary to standard methodology, we apply the method to Y_i that are not (even approximately) centered. In this experiment, standard Bayes did not converge to the optimal model until after n = 150, as in the experiment of Section 5.3, but its risk and selected model orders were both smaller. The versions of SafeBayes worked well, as before.

D.1 Implementing SafeBayes
To implement the Safe Bayesian algorithm (page 13), generalized posteriors must be computed for different values of η, and the randomized loss (18) must be computed for each sample size. For linear models with conjugate priors as considered in our experiments, all required quantities can be computed analytically. We have already seen how to do this for models M p with fixed dimension p. For unions of such models, it turns out that the mix-loss is a helpful tool.
Here ∝ means 'proportional to' when p is varied and z^n and η are fixed. In practice we prefer to calculate this quantity incrementally: the generalized posterior for z^{n+1} with prior Π is equal to the generalized posterior for the single data point z_{n+1} when the generalized posterior for z^n is used as the prior (in this sense the generalized posterior behaves like the standard posterior). Using this to further rewrite the second line of (46) gives

$$
\begin{aligned}
\pi(p \mid z^n, \eta)
&\propto \int \big(f(y^n \mid x^n, \tau, p)\big)^{\eta}\, \pi(\tau \mid p)\, d\tau \cdot \pi(p) \\
&= \int \big(f(y_n \mid x_n, \tau, p)\big)^{\eta} \cdot \big(f(y^{n-1} \mid x^{n-1}, \tau, p)\big)^{\eta}\, \pi(\tau \mid p)\, d\tau \cdot \pi(p) \\
&= \int \big(f(y_n \mid x_n, \tau, p)\big)^{\eta} \cdot \pi(\tau \mid z^{n-1}, p, \eta)\, d\tau
  \cdot \int \big(f(y^{n-1} \mid x^{n-1}, \tau, p)\big)^{\eta}\, \pi(\tau \mid p)\, d\tau \cdot \pi(p) \\
&\propto \int \big(f(y_n \mid x_n, \tau, p)\big)^{\eta} \cdot \pi(\tau \mid z^{n-1}, p, \eta)\, d\tau \cdot \pi(p \mid z^{n-1}, \eta),
\end{aligned}
$$

where in the third step we used the definition of the generalized posterior, and in the last step we used (46). The integral appearing in the cumulative and in the step-wise expression equals the expectation in (40) of the η-flattened, η-generalized Bayesian predictive density, for n outcomes and for one outcome respectively; $-\log\big[(\cdot)^{1/\eta}\big]$ of this quantity is the mix loss of model p. We will now derive formulas for this quantity.
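The incremental update derived above lends itself directly to implementation: at each step, the posterior on model indices is reweighted by the η-flattened predictive density of each model for the new point and then renormalized. A minimal sketch, in which these predictive densities are assumed precomputed (e.g. analytically, for conjugate linear models):

```python
import numpy as np

def update_model_posterior(post, pred_dens):
    """One step of the incremental update:
    pi(p | z^n, eta) is proportional to
    [integral of f(y_n|x_n,tau,p)^eta over pi(tau | z^{n-1}, p, eta)]
    times pi(p | z^{n-1}, eta).

    post: current posterior over model indices (nonnegative, sums to 1);
    pred_dens[p]: the eta-flattened predictive density of model p for the
    new data point (the integral in the last display, assumed precomputed).
    """
    new = np.asarray(post) * np.asarray(pred_dens)
    return new / new.sum()

def mix_loss(pred_dens_p, eta):
    """Mix loss of model p: -log of the (1/eta)-th power of its
    eta-flattened predictive density for the new point."""
    return -np.log(pred_dens_p) / eta

# Toy usage: two models, the second explains the new point much better.
post = np.array([0.5, 0.5])
post = update_model_posterior(post, pred_dens=[0.1, 0.4])
```

In the toy step, the posterior moves from (0.5, 0.5) to (0.2, 0.8), since the second model's flattened predictive density is four times larger.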

D.2 Belief in Concentration (proof of Theorem 1)
For simplicity, we only give the proof for the unconditional case, in which the θ represent distributions P_θ on z ∈ Z; the extension to the conditional case is straightforward. For 0 < η < 1, let d_η(θ* ‖ θ) denote the Rényi divergence of order 1 − η (van Erven and Harremoës, 2014). We first state a lemma, proved further below. In the lemma, as in the remainder of the proof, (θ*, Z^n) is the random variable distributed according to the Bayesian distribution Π.
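For reference, the Rényi divergence of order α between distributions with densities $p_{\theta^*}$ and $p_\theta$ (with respect to a dominating measure $\mu$) is standardly defined as

```latex
R_\alpha\big(P_{\theta^*} \,\big\|\, P_\theta\big)
  = \frac{1}{\alpha - 1}
    \log \int p_{\theta^*}(z)^{\alpha}\, p_\theta(z)^{1-\alpha}\, d\mu(z),
\qquad \alpha \in (0, 1),
```

so $d_\eta(\theta^* \| \theta)$ corresponds to taking $\alpha = 1 - \eta$ (possibly up to the scaling convention used in the paper's own definition).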