Robust heavy-tailed versions of generalized linear models with applications in actuarial science

Generalized linear models (GLMs) form one of the most popular classes of models in statistics. The gamma variant is used, for instance, in actuarial science for the modelling of claim amounts in insurance. A flaw of GLMs is that they are not robust against outliers (i.e., against erroneous or extreme data points). A difference in trends in the bulk of the data and the outliers thus yields skewed inference and predictions. To address this problem, robust methods have been introduced. The most commonly applied robust method is frequentist and consists in an estimator which is derived from a modification of the derivative of the log-likelihood. We propose an alternative approach which is modelling-based and thus fundamentally different. It allows for an understanding and interpretation of the modelling, and it can be applied for both frequentist and Bayesian statistical analyses. The approach possesses appealing theoretical and empirical properties.


Introduction

Generalized linear models
Generalized linear models (GLMs) form a class of regression models introduced by Nelder and Wedderburn (1972) which encompasses some of the most widely used statistical models, with applications ranging from actuarial science (Goldburd et al., 2016) to medicine (Lindsey and Jones, 1998). This class generalizes normal linear regression, that is, linear regression with normally distributed errors, by assuming that the distribution of the dependent variable defines an exponential family with parameters that depend on explanatory variables, and an expectation that is linear in the explanatory variables, up to a transformation. As a result, GLMs can handle both discrete and continuous dependent variables, with distribution shapes that offer flexibility regarding, in particular, the skewness. GLMs cover models such as, as mentioned, the classical linear regression for normally distributed responses, logistic regression for binary ones, Poisson regression for count data, and gamma regression for right-skewed positive data, plus many other statistical models obtained through the general model formulation.
The use of GLMs in actuarial science can be traced back to the early 1980s with examples of the fitting of GLMs to motor-insurance data (McCullagh and Nelder, 1983). In the insurance industry, gamma GLM is popular for the modelling of claim severity because its probability density function (PDF) shares key characteristics with the empirical density of claim amounts. The latter indeed often exhibits skewness and is strictly increasing up to a positive real value, after which it strictly decreases. Gamma GLM is used to determine the factors that contribute the most to the claim size and how they influence the latter, and to predict claims based on a given set of explanatory variables, ultimately leading to the pricing of insurance products.

Robustness problems
In this paper, we study the robustness problems of GLMs against outliers and propose a robust version.
In Figure 1, we show how these problems translate in practice with gamma GLM by illustrating the evolution of parameter estimates as a data point moves away from the bulk of the data. We simulated a data set of size n = 20 with one explanatory variable whose data points x_i2 are a standardized version of 1, . . ., n (x_i1 = 1 for all i to introduce an intercept in the model). First, each observation of the dependent variable y_i was sampled from a gamma distribution with a mean parameter of μ_i = exp(x_i^T β), where β = (β_1, β_2)^T = (0, 1)^T, and a parameter ν = 40 corresponding to the inverse of the dispersion parameter. Next, we gradually increased the value of y_n from 6 (a non-outlying value) to 15 (a clearly outlying value). For each data set associated with a different value of y_n, we estimated the parameters ν, β_1 and β_2 of gamma GLM based on the maximum likelihood method. In Figure 1, we also show the estimates based on the proposed method, and those based on the method of Cantoni and Ronchetti (2001), the latter being the most commonly applied robust method. In the following, we will present the details of both robust methods. In Figure 1, we observe that outliers can arbitrarily affect the maximum likelihood estimation of gamma GLM. Note that n = 20 and one explanatory variable (thus three parameters) together represent a situation where the sample size is moderate comparatively to the number of parameters.

Figure 1: Estimates of ν, β_1 and β_2 as a function of y_n based on the method of Cantoni and Ronchetti (2001), gamma GLM and the proposed method (with c = 1.6); the black horizontal lines represent the maximum likelihood estimates of gamma GLM based on the data set excluding the outlier.
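The sensitivity experiment above can be sketched as follows. The snippet below is a minimal illustration of our own (not the paper's code): it fits the regression coefficients of a gamma GLM with log link by Newton-Raphson (the shape parameter ν multiplies the score equations for β and thus cancels from them) and shows how the non-robust maximum likelihood slope reacts when y_n is replaced by an outlying value.

```python
import math
import random

def fit_gamma_glm(x, y, iters=100):
    """MLE of (b1, b2) in a gamma GLM with log link, mu_i = exp(b1 + b2 * x_i).
    Newton-Raphson on the score sum_i (y_i/mu_i - 1) * (1, x_i)^T; the shape
    parameter nu multiplies this score and therefore cancels."""
    b1 = b2 = 0.0
    for _ in range(iters):
        s1 = s2 = h11 = h12 = h22 = 0.0
        for xi, yi in zip(x, y):
            mu = math.exp(b1 + b2 * xi)
            r = yi / mu - 1.0          # relative residual
            w = yi / mu                # curvature weight (negative Hessian)
            s1 += r; s2 += r * xi
            h11 += w; h12 += w * xi; h22 += w * xi * xi
        det = h11 * h22 - h12 * h12
        b1 += (h22 * s1 - h12 * s2) / det
        b2 += (h11 * s2 - h12 * s1) / det
    return b1, b2

# Simulate data as in the experiment: n = 20, standardized covariate,
# true beta = (0, 1), shape nu = 40 (scale mu_i/nu so that E[Y_i] = mu_i).
random.seed(1)
n, nu = 20, 40.0
mean_x = (n + 1) / 2
sd_x = math.sqrt(sum((i - mean_x) ** 2 for i in range(1, n + 1)) / (n - 1))
x = [(i - mean_x) / sd_x for i in range(1, n + 1)]
y = [random.gammavariate(nu, math.exp(xi) / nu) for xi in x]

b1_clean, b2_clean = fit_gamma_glm(x, y)
y_out = y[:-1] + [15.0]                # move y_n to a clearly outlying value
b1_out, b2_out = fit_gamma_glm(x, y_out)
```

With this seed, the slope estimate is close to the true value 1 on the clean data and is pulled upwards once y_n is set to 15, in line with the behaviour of the maximum likelihood estimates in Figure 1.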
To have a concrete idea of how and why those robustness problems arise, let us discuss the example of the databases of insurance companies and the potential impact of their use of gamma GLM. Insurance companies are not shielded from issues such as data quality, and thus the presence of erroneous data points in their databases. Also, extreme data points are often present in their databases. Both have a negative impact on the conclusions drawn and predictions made from statistical analyses. This is due to the non-robustness of the regression models typically employed, such as gamma GLM, against outliers, and therefore against data with gross errors and extreme data points. When using gamma GLM, the non-robustness is a consequence of the light tails of the PDF, combined with a difference in the trends in the bulk of the data and the outliers. When the likelihood function is evaluated at parameter values reflecting the trend in the bulk of the data, the light tails penalize heavily those values for the outliers, diminishing significantly the likelihood-function value. The analogous phenomenon arises when the likelihood function is evaluated at parameter values reflecting the trend in the outliers: those values are heavily penalized for the bulk of the data. All that makes values between those aforementioned more plausible, representing an undesirable compromise. The resulting maximum likelihood estimate (MLE) thus reflects neither the bulk of the data nor the outliers.

Motivation for robust approaches, existing approaches and our proposal
Another undesirable consequence of those robustness problems is that identifying outliers using standard measures based on GLM maximum likelihood estimation, such as Pearson residuals (which will be defined in the following), may be ineffective. This is a result of the masking effect (Hadi and Simonoff, 1993), as outliers may mask one another due to an adjustment of the model. Moreover, univariate analyses of extreme values may not suffice to deal with the problem, because the notion of outlier here is with respect to the model employed, that is, a combination of x_i and y_i which is unlikely under this model when using parameter values reflecting the trend in the bulk of the data. A data point can thus be an outlier with respect to the model without having any extreme values in the explanatory or dependent variables. There is an additional difficulty in situations where there are many explanatory variables and one wants to perform variable selection, because going through univariate and Pearson-residual analyses (assuming that these are effective) is simply infeasible.
All that motivates the use of robust GLMs which can automatically and effectively deal with outliers, and thus offer protection in case the data set to be analysed contains outliers. The non-robustness properties of GLMs have been studied by several authors who have proposed robust alternatives. On the frequentist side, with a focus on logistic regression, Pregibon (1982) proposed a resistant fitting method, and Stefanski et al. (1986) and Künsch et al. (1989) studied optimally bounded score functions. Cantoni and Ronchetti (2001) proposed robust estimators for all GLMs based on the notion of quasi-likelihood; their approach is to adapt the robust M-estimator for linear regression of Huber (1973). More recently, Beath (2018b) proposed a robust alternative based on a mixture where the main component is a standard GLM, while the other, dedicated to outliers, is an over-dispersed GLM achieved by including a random-effect term in the linear predictor. This approach has the advantages of being generally applicable and modelling-based, implying that it can be used for both frequentist and Bayesian statistical analyses. Random-effect models are however known for being difficult to estimate; the estimation procedure is sometimes ineffective or quite slow. This is for instance the case in the simulation study conducted in this paper: the estimation based on a single data set takes too long, which makes the inclusion of the method in the simulation study infeasible.
On the Bayesian side, apart from the approach of Beath (2018b) that can be used for Bayesian analyses, we did not find any robust approach specifically for GLMs. A general approach is that of Bissiri et al. (2016), who introduced another statistical paradigm based on the premise that the assumed model is incorrect, but we consider it as a different type of approach from those considered here and will not focus on it in the current document. Bayesian robust approaches typically consist in adapting the original model to the potential presence of outliers by replacing the distribution by one that is similar, but with heavier tails.
Typical frequentist and Bayesian robust methods are often seen as being fundamentally different. In the former, the likelihood function is modified (yielding M-estimators) for the purpose of diminishing the impact of outliers, whereas in the latter, the original PDF is directly replaced by another density which, while being as similar as possible to the original one, has heavier tails. We highlight in Section 2 that a connection exists in some situations by viewing M-estimation as maximum likelihood estimation of a heavy-tailed model. Establishing such a connection provides another perspective on M-estimators. Firstly, it allows one to associate a model to the estimator, which is important from a modelling point of view. Secondly, it allows the method to be used for Bayesian analyses. In Section 2, we also highlight that, with the approach of Cantoni and Ronchetti (2001), it is not possible to establish a clear correspondence with a model, because the function that they modify to gain robustness is the derivative of the log-likelihood, instead of the log-likelihood itself. In Section 3, we present our approach in the form of a heavy-tailed distribution whose maximum likelihood estimation can be seen as M-estimation of the gamma GLM parameters. We focus on the case of gamma GLM throughout the document, but our approach is valid for any GLM based on a distribution with tails, whether it is continuous or discrete, such as inverse Gaussian and Poisson GLMs. The case of inverse Gaussian is treated in detail in Appendix C.
In Section 3, we also present theoretical results which allow us to characterize the proposed model. We present sufficient conditions under which the posterior distribution for a Bayesian analysis is proper. We study the robustness properties by characterizing the behaviour of the likelihood function and posterior distribution as outliers move away from the bulk of the data. In Section 4, we evaluate the performance of the proposed approach through a simulation study and present a Bayesian case study based on a detailed analysis of a real data set. The paper finishes in Section 5 with a discussion and retrospective comments. All proofs of theoretical results are deferred to Appendix A. The code to reproduce all numerical results is available online (see ancillary files on https://arxiv.org/abs/2305.13462).

Connection between robust estimators and heavy-tailed distributions
In Section 2.1, we take the Huber M-estimator in the context of linear regression (Huber, 1973) as an example to highlight that a clear connection between an M-estimator and a heavy-tailed distribution exists in some situations. In Section 2.2, we return to the context of gamma GLM and explain that several PDFs yield the modified estimating equation proposed in Cantoni and Ronchetti (2001).

One-to-one correspondence: Huber M-estimator in linear regression
Consider that we have access to a data set of the form {x_i, y_i}_{i=1}^n, where x_1, . . ., x_n ∈ R^p are vectors of explanatory-variable data points and y_1, . . ., y_n ∈ R are observations of the dependent variable. In linear regression, the random variables Y_1, . . ., Y_n are assumed to be independent (or conditionally independent under a Bayesian framework) and modelled as

Y_i = x_i^T β + σ ε_i,  i = 1, . . ., n,

where β = (β_1, . . ., β_p)^T ∈ R^p is the vector of regression coefficients, σ > 0 is a scale parameter, and ε_1, . . ., ε_n ∈ R are standardized errors, which are assumed to be independent and identically distributed with ε_i ∼ f. In the normal linear regression model, f = N(0, 1). To find the MLE, we maximize the log-likelihood function, denoted by ℓ, which is such that

ℓ(β, σ) = −n log σ − n log m + Σ_{i=1}^n log g((y_i − x_i^T β)/σ),    (1)

where we wrote f(ε) = g(ε)/m with g(ε) = exp(−ε²/2) and m = √(2π), the normalizing constant. Maximizing this function is equivalent to minimizing

n log σ + Σ_{i=1}^n (y_i − x_i^T β)² / (2σ²),    (2)

if we omit the constant term. The quadratic term above produces extreme values when some residuals y_i − x_i^T β are extreme, which is the case for outliers when the log-likelihood function is evaluated at parameter values reflecting the trend in the bulk of the data. The idea of Huber (1973) was to modify the quadratic term to deal with this problem. He proposed to instead minimize

n log σ + Σ_{i=1}^n ϱ((y_i − x_i^T β)/σ),

with ϱ being the Huber loss function (Huber, 1964), defined as

ϱ(ε) = ε²/2 if |ε| ≤ k, and ϱ(ε) = k|ε| − k²/2 if |ε| > k,    (3)

where k > 0 is a tuning parameter chosen by the user to reach a compromise between efficiency and robustness. The penalization by the Huber loss function is quadratic, as before, between −k and k; otherwise, the penalization is linear, which is more moderate. Note that the term −k²/2 in (3) is to make ϱ continuous.
Replacing minus the log of g by such a loss function in the log-likelihood yields an estimator called a maximum likelihood type estimator (M-estimator). To establish a connection with a heavy-tailed distribution, we instead view it as the MLE of another model. In our example above, it is like viewing the minimization of (2) as being equivalent to the maximization of (1), but with

g(ε) = exp(−ϱ(ε)) = exp(−ε²/2) if |ε| ≤ k, and g(ε) = exp(k²/2 − k|ε|) if |ε| > k.

The function to minimize with the Huber M-estimator can thus be viewed as being associated to a likelihood function, where the PDF f involved in it has a central part which is proportional to a standard normal density and tails that behave like exp(−k|ε|). The tails are thus similar to those of a Laplace density and are thus heavier than the tails of a normal density.
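To make the correspondence concrete, here is a small sketch of our own (with k set to a commonly used default value, an illustrative choice) of the Huber loss and of the unnormalized density exp(−ϱ(ε)) it induces, whose tails decay like exp(−k|ε|):

```python
import math

def huber_rho(eps, k=1.345):
    """Huber loss: quadratic on [-k, k], linear beyond, with the -k^2/2 term
    making the two pieces join continuously at |eps| = k."""
    if abs(eps) <= k:
        return eps * eps / 2.0
    return k * abs(eps) - k * k / 2.0

def g(eps, k=1.345):
    """Unnormalized density implied by the loss: exp(-rho(eps)).
    Equals exp(-eps^2/2) in the centre and exp(k^2/2 - k|eps|) in the tails."""
    return math.exp(-huber_rho(eps, k))
```

At ε = 4, for instance, g assigns noticeably more mass than the normal kernel exp(−ε²/2), which is what tempers the influence of large residuals.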
The Huber M-estimator is an example for which a clear correspondence with a heavy-tailed distribution exists. However, that is not the case for all M-estimators. For instance, it is not possible to establish a connection with a model for Tukey's biweight M-estimator (Beaton and Tukey, 1974), as the loss function is constant beyond a threshold, thus yielding an improper distribution.

The case of Cantoni and Ronchetti (2001)

In the GLM context, we will use, for this section and the rest of the paper, the same notation as in Section 2.1 for the explanatory-variable data points (i.e., x_1, . . ., x_n), the observations of the dependent variable (i.e., y_1, . . ., y_n), and the regression coefficients (i.e., β = (β_1, . . ., β_p)^T). In the case of gamma GLM, there is an additional parameter, ν > 0, which corresponds to the shape parameter and also to the inverse of the dispersion parameter. Note that y_i > 0 here as the dependent variable is positive.
The robust estimator of β proposed by Cantoni and Ronchetti (2001) corresponds to the solution of the following estimating equation:

Σ_{i=1}^n Ψ(y_i, x_i, β, ν) = 0,    (4)

where Ψ(y_i, x_i, β, ν) = ψ_c(r_i(β, ν)) √ν x_i, with r_i(β, ν) = √ν (y_i/μ_i − 1) and

ψ_c(r) = r if |r| ≤ c, and ψ_c(r) = c sgn(r) if |r| > c,

sgn( • ) being the sign function and c > 0 a tuning parameter chosen by the user to reach a compromise between efficiency and robustness. In (4), we set the weight function w included in Ψ in Cantoni and Ronchetti (2001) and applied to each x_i to 1, and omitted a Fisher-consistency correction term to simplify. Note that μ_i corresponds to the mean parameter in gamma GLM and is thus such that μ_i = exp(x_i^T β) when using the log link, which will be used throughout. Note also that var[Y_i] = μ_i²/ν with gamma GLM, and r_i(β, ν) becomes what is referred to as the Pearson residual when evaluated at the estimates of β and ν.
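As an illustration, the building blocks of this estimating equation can be coded as below. This is our own sketch of one plausible form of the summand, with the weight function set to 1 and the Fisher-consistency correction omitted, as above:

```python
import math

def pearson_residual(y, mu, nu):
    """r_i(beta, nu) = sqrt(nu) * (y_i/mu_i - 1); this is the Pearson residual
    once evaluated at the estimates, since var[Y_i] = mu_i^2 / nu in gamma GLM."""
    return math.sqrt(nu) * (y / mu - 1.0)

def psi(r, c):
    """Huber psi function: identity on [-c, c], clipped at +-c beyond."""
    return max(-c, min(c, r))

def score_contribution(y, x, beta, nu, c):
    """Contribution of one data point to the robust score, with log link
    mu = exp(x^T beta); unit weights, no Fisher-consistency correction."""
    mu = math.exp(sum(b * xj for b, xj in zip(beta, x)))
    r = pearson_residual(y, mu, nu)
    return [psi(r, c) * math.sqrt(nu) * xj for xj in x]
```

For a clearly outlying observation, the residual exceeds c, so the contribution is clipped: it no longer grows with y.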
Placed in a context of M-estimation, where one wants to minimize Σ_{i=1}^n −ρ(y_i, x_i, β, ν) with −ρ being a loss function different from minus the log of the density, Ψ can be viewed as the partial derivative of ρ with respect to β. If we set ρ as follows:

ρ(y_i, x_i, β, ν) = ℓ_i(β, ν) if |r_i(β, ν)| ≤ c,
ρ(y_i, x_i, β, ν) = −c√ν (h(y_i) − x_i^T β) + a_1(ν) if r_i(β, ν) > c,    (5)
ρ(y_i, x_i, β, ν) = c√ν (h(y_i) − x_i^T β) + a_2(ν) if r_i(β, ν) < −c,

with possibly an omitted additive term common to all cases and independent of β, and ℓ_i being the contribution of the i-th data point to the log-likelihood in gamma GLM, that is,

ℓ_i(β, ν) = ν log ν − log Γ(ν) + (ν − 1) log y_i − ν x_i^T β − ν y_i/μ_i,

it can be readily verified that the derivative of ρ(y_i, x_i, β, ν) in (5) with respect to β is equal to Ψ(y_i, x_i, β, ν) in (4). The terms a_1(ν) and a_2(ν) can be used to make a corresponding PDF continuous. There are thus many possible loss functions (5) and corresponding PDFs that can result in the estimating equation (4), because h(y_i), which does not depend on β, has no influence on the derivative of ρ with respect to β. In an attempt to establish a clear connection between the loss function in (5) and a specific heavy-tailed distribution, and to take the analysis one step further, we consider a natural choice for h, as we now explain.
Let us have a look at the behaviour of the right tail of the gamma PDF, as it is the one that creates the most serious robustness problems because of its exponential decay. When y_i → ∞, with β and ν fixed, the dominant term of ℓ_i(β, ν) in gamma GLM is −ν y_i/μ_i = −ν exp(log y_i − x_i^T β). To retrieve a similar form in the function (5) when r_i(β, ν) > c (which is the part of the function that is activated when y_i is large, and β and ν are fixed), h should be set to the log function. With this function, the PDF f_{β,ν,c} of the dependent variable based on ρ in (5) is such that (when the estimator is viewed as the MLE of a different model instead of an M-estimator)

f_{β,ν,c}(y_i) = f_{ν,c}(y_i/μ_i)/μ_i,

where f_{ν,c} is the PDF of Y_i/μ_i which does not depend on β and g_{ν,c} is the unnormalized version of the latter defined as

g_{ν,c}(z) = z^{ν−1} exp(−νz) if |√ν(z − 1)| ≤ c,
g_{ν,c}(z) ∝ z^{−c√ν−1} if √ν(z − 1) > c,    (6)
g_{ν,c}(z) ∝ z^{c√ν−1} if √ν(z − 1) < −c,

with proportionality constants (functions of ν and c) chosen to make g_{ν,c} continuous. Let us now discuss the characteristics of the function g_{ν,c}. The left part (the third case in (6)) may not exist (in the sense of never being activated): given that z > 0, the left part exists when 1 − c/√ν > 0, which is equivalent to c < √ν. This means that the left part may exist even when ν ≤ 1, which is counterproductive given that in this case the original gamma PDF does not converge to 0 as z → 0 (it converges to a constant when ν = 1 and goes to infinity when ν < 1); the gamma PDF has, in a sense, no left tail in this case. Note that the analogous fact is true regarding the estimator of Cantoni and Ronchetti (2001) (recall (4)), meaning that Ψ(y_i, x_i, β, ν) may take its clipped form for small values of y_i even when ν ≤ 1. In order to understand clearly the difference with gamma GLM in terms of tail behaviour, let us have a close look at the two tails of g_{ν,c} separately. We compare the latter with a gamma PDF with a mean parameter of 1, corresponding to the central part of g_{ν,c}. On the right side, when z → ∞, with ν fixed, the dominant term of the gamma PDF is exp(−νz); the decrease is thus exponential and faster than the polynomial decrease of g_{ν,c}. On the left side, when z → 0, both PDFs have essentially the same behaviour. When 0 < ν < 1, the dominant term of the gamma PDF, which is z^{ν−1}, increases polynomially as z → 0, and it is the same for g_{ν,c} (we can show that c√ν − 1 < 0 when 0 < ν < 1 and the left part exists). When ν > 1, the gamma PDF decays polynomially when z → 0 and it is the same for g_{ν,c} (at least when c ≥ 1).
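To see the difference in decay speed numerically, one can compare the gamma PDF's dominant right-tail term exp(−νz) with a polynomial tail z^{−a}; the exponent a below is an illustrative choice on our part:

```python
import math

nu, c = 40.0, 1.6
a = c * math.sqrt(nu) + 1.0   # illustrative polynomial exponent (our choice)

def tail_ratio(z):
    """Ratio of the exponential tail term exp(-nu*z) to the polynomial
    tail z**(-a); it collapses towards 0 as z grows."""
    return math.exp(-nu * z) / z ** (-a)
```

Already at z = 2, the exponential term is tens of orders of magnitude below the polynomial one, which is why the exponential tail penalizes outliers so heavily.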
There are three flaws with the model f_{β,ν,c} presented above and the associated M-estimator: i) its central part does not match that of a gamma PDF, but is only proportional to it, which may negatively affect the efficiency in the absence of outliers; ii) the left part may exist when not useful; and iii) the tail decay is arguably not slow enough (we return to this point in Section 3), which may provide an explanation for the bounded, but not redescending, influence of outliers that was observed in Figure 1. In Section 3, we propose a robust alternative that is similar in essence to f_{β,ν,c} but does not have those three flaws.

Robust heavy-tailed versions of GLMs
In this section, we present our proposal to gain robustness in statistical analyses based on GLMs. We start in Section 3.1 with an alternative model definition and next discuss theoretical properties characterizing the approach in Section 3.2.

Model definition
Our proposal is rooted in a line of research called resolution of conflict, which studies how conflicting sources of information are dealt with by Bayesian models. In this line of research, an outlier is seen as a source of information that is in conflict with others. The sources with which it is in conflict represent, among others, the non-outliers. Here, to simplify, we consider that the prior distribution (in a Bayesian analysis) is not in conflict with the non-outliers. That line of research was started by de Finetti (1961), with a first analysis in Lindley (1968), followed by an introduction of a formal theory in Dawid (1973), Hill (1974) and O'Hagan (1979). For a review of Bayesian heavy-tailed models and conflict resolution, see O'Hagan and Pericchi (2012). In the latter paper, it is noted that there exists a gap between the models formally covered by the theory of conflict resolution and models commonly used in practice. The present paper contributes to the expansion of the theory of conflict resolution by covering models used in practice, namely GLMs.
The reason why that gap exists is that it is notoriously difficult to study models from the point of view of conflict resolution, even simple location-scale models; see, e.g., O'Hagan (1979), Desgagné and Angers (2007), Andrade and O'Hagan (2011), Desgagné (2013) and Desgagné (2015). The work of Desgagné (2015) introduced an analysis technique and paved the way to the study of more complex models, like linear regressions (Desgagné and Gagnon, 2019; Gagnon et al., 2020, 2021; Gagnon, 2023; Gagnon and Hayashi, 2023; Hamura et al., 2022, 2023), and Poisson and negative binomial regressions (Hamura et al., 2021). The work of Desgagné (2015) also showed that polynomial tails are not heavy enough to yield a desirable property called whole robustness (which is defined precisely in Section 3.2), at least for the location-scale model; the same was shown to be true in linear regression by Gagnon and Hayashi (2023). Desgagné (2015) proved that, for a location-scale model, it is sufficient to assume that the PDF has tails which are log-regularly varying, a concept introduced in that paper. The author proposed a PDF which satisfies this condition; it is called the log-Pareto-tailed normal (LPTN) distribution, as the central part of this continuous PDF coincides with that of the standard normal and the tails are log-Pareto, meaning that they behave like (1/|z|)(1/log |z|)^λ with λ > 1. This approach was subsequently adapted to the context of linear regression by Gagnon et al. (2020), where the error distribution is assumed to be LPTN instead of normal, and whole robustness was shown to hold.
With this work, we take one step further by adapting the approach to GLMs: the distribution of the dependent variable is a modified version where the central part is kept as is, while the extremities are replaced by log-Pareto tails. Focusing on gamma GLM, we assume that Y_i ∼ f_{β,ν,c} with Y_i/μ_i ∼ f_{ν,c} (we use the same notation as in Section 2.2 to simplify), where the proposed PDF f_{ν,c} is defined as

f_{ν,c}(z) = f_mid(z) if z_l ≤ z ≤ z_r,
f_{ν,c}(z) = f_right(z) = f_mid(z_r) (z_r/z) (log z_r / log z)^{λ_r} if z > z_r,    (7)
f_{ν,c}(z) = f_left(z) = f_mid(z_l) (z_l/z) (log z_l / log z)^{λ_l} if 0 < z < z_l,

where f_mid is the gamma PDF with mean and shape parameters of 1 and ν, respectively, and z_r, λ_r, z_l and λ_l are functions of ν > 0 and c > 0 given by

z_r = 1 + c/√ν,  z_l = 1 − c/√ν if ν > 1 and c < √ν, and z_l = 0 otherwise,

and, when the corresponding parts exist,

λ_r = 1 + f_mid(z_r) z_r log z_r / P(Z_ν > z_r),  λ_l = 1 + f_mid(z_l) z_l |log z_l| / P(Z_ν < z_l),

with Z_ν being a random variable following a gamma distribution whose mean and shape parameters are 1 and ν, respectively.
We now make a few remarks about the model. First, z_r > 1 and thus the log terms in f_right are positive. Also, f_left is activated for some value of z when z_l > 0, that is, when c < √ν and ν > 1, and z_l is upper bounded by 1. This implies that both log terms in f_left are negative and thus that f_left(z) > 0 when 0 < z < z_l. The constraint that z_l = 0 if ν ≤ 1 is to ensure that f_left is never activated when the original gamma PDF does not have a left tail.
The terms z_l and z_r, depending on ν and c, control which part of the function is activated. The terms f_mid(z_r), z_r and log z_r in f_right, as well as f_mid(z_l), z_l and log z_l in f_left, ensure that the PDF is continuous. The function f_ν,c is integrable for all c, ν > 0. It goes to +∞ as z → 0 when f_left exists. This behaviour close to 0 allows to have integrals that are similar to those on the right tail, which is to be contrasted with the behaviour under the original gamma PDF, given that the latter function goes to 0 as z → 0 (when it has a left tail). Indeed, an integral from 0 to a small value a can be rewritten as

∫_0^a f_left(z) dz = ∫_{1/a}^∞ f_mid(z_l) z_l (1/u) (|log z_l| / log u)^{λ_l} du

after the change of variables u = 1/z. The mass associated to the left tail can thus be viewed as an integral from 1/a to ∞, with respect to a function which is similar to f_right, but with a different normalizing constant and a different power term. In other words, the behaviour of f_left is analogous to that of f_right.
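The construction of the right tail can be sketched numerically. The snippet below is our own illustration under explicitly stated assumptions: a gamma centre f_mid with mean 1 and shape ν, a threshold z_r = 1 + c/√ν, and a right tail of the form f_mid(z_r)(z_r/z)(log z_r/log z)^{λ_r}; the exponent λ_r is obtained by matching the tail mass with P(Z_ν > z_r), using the fact that the tail integral has the closed form f_mid(z_r) z_r log z_r / (λ_r − 1):

```python
import math

def f_mid(z, nu):
    """Gamma PDF with mean 1 and shape nu (i.e., scale 1/nu)."""
    return nu ** nu * z ** (nu - 1.0) * math.exp(-nu * z) / math.gamma(nu)

def upper_tail_prob(z0, nu, steps=20000):
    """P(Z_nu > z0) by Simpson's rule on a truncated range (simple sketch)."""
    z_max = 1.0 + 12.0 / math.sqrt(nu)   # ~12 standard deviations past the mean
    h = (z_max - z0) / steps
    total = 0.0
    for i in range(steps):
        lo, hi = z0 + i * h, z0 + (i + 1) * h
        total += (f_mid(lo, nu) + 4.0 * f_mid((lo + hi) / 2.0, nu)
                  + f_mid(hi, nu)) * h / 6.0
    return total

def lambda_r(nu, c):
    """Exponent making the log-Pareto right tail carry mass P(Z_nu > z_r)."""
    z_r = 1.0 + c / math.sqrt(nu)
    return 1.0 + f_mid(z_r, nu) * z_r * math.log(z_r) / upper_tail_prob(z_r, nu)

def f_right(z, nu, c):
    """Log-Pareto right tail, continuous with the gamma centre at z_r."""
    z_r = 1.0 + c / math.sqrt(nu)
    lam = lambda_r(nu, c)
    return f_mid(z_r, nu) * (z_r / z) * (math.log(z_r) / math.log(z)) ** lam
```

For ν = 40 and c = 1.6, λ_r exceeds 1 (so the tail integral is finite) and f_right joins f_mid continuously at z_r.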
Comparisons between gamma PDFs (with mean and shape parameters of 1 and ν, respectively) and f_ν,c with c = 1.6 are shown for different values of ν in Figure 2. We observe that both PDFs are globally quite similar, but beyond the threshold at which they start to be defined differently, f_ν,c first decreases slightly faster for a short interval (a consequence of the continuity of the function with a constraint of integrating to 1), after which f_ν,c goes above the gamma PDF. The length of that interval shortens as ν increases. Note that in Figure 2 (b)-(c), we do not see that f_ν,c(z) → ∞ as z → 0 because this explosive behaviour happens too close to 0 to be observed. The exponents λ_r and λ_l in f_ν,c play an important role: they make the function f_ν,c a PDF. In particular,

∫_{z_r}^∞ f_ν,c(z) dz = P(Z_ν > z_r), and, when z_l > 0, ∫_0^{z_l} f_ν,c(z) dz = P(Z_ν < z_l).

In Figure 3, we highlight, with a log-scale on the y-axis, that λ_r and λ_l are well defined and can be computed for any ν and c (provided that f_left exists in the case of λ_l). Figure 3 also allows to show that λ_r and λ_l have an interesting asymptotic behaviour as ν → ∞, as indicated by Proposition 1. In the latter, we use Φ to denote the cumulative distribution function of a standard normal distribution.
Proposition 1. Viewed as functions of ν, both λ_l and λ_r converge, as ν → ∞ for any fixed c, towards 1 + c Φ′(c) / (1 − Φ(c)), where Φ′ denotes the derivative of Φ, that is, the standard normal PDF.

In terms of estimation, the proposed model can be estimated by the maximum likelihood method (the results shown in Figure 1 have been produced using this method). The MLE can be viewed as a robust M-estimator of gamma GLM, with an estimating equation having the same form as that proposed by Cantoni and Ronchetti (2001) (we return to this point in Section 3.2). Estimating the proposed model for a frequentist statistical analysis thus has the same conceptual complexity, and a similar computational complexity. From that perspective of M-estimation of gamma GLM, the estimating equation corresponding to the derivative of the log-likelihood of the proposed model can be modified to include a correction term to ensure Fisher consistency, and a weight function can be applied to the vectors x_i to decrease the influence of high-leverage points, in the same spirit as the method of Cantoni and Ronchetti (2001). As mentioned, one of the advantages of our approach is that it can also be applied to perform robust Bayesian analyses. Markov chain Monte Carlo methods can be employed to obtain posterior means, medians, credible intervals, and so on. We discuss Bayesian estimation in detail in Section 4.2.
We finish this section with two remarks about the tuning parameter c. Firstly, it plays the same role as the parameter with the same notation c in the method of Cantoni and Ronchetti (2001) presented in Section 2.2: it allows the user to reach a compromise between efficiency and robustness, and the conditions in (7) determining which part of the function is activated can be rewritten like those in (6). Secondly, there is a correspondence between the value of c and the mass under f_ν,c assigned to the part where the density exactly matches the gamma PDF. For example, when c = 1.6 and ν = 36.3 (the value used for c and estimated for ν in the real-data example in Section 4.2), the mass of the central part is P(z_l ≤ Z_ν ≤ z_r) ≈ 0.89. This correspondence can be exploited to guide the choice of c, if one has prior beliefs about ν. If, for instance, one believes that ν should take values around 40, and one wants 90% of the mass to be assigned to the central part, one should set c to 1.65. In order to recommend an objective and effective choice of value for c in case one does not have prior beliefs about ν or wants an automated approach for selecting this tuning parameter, we evaluate the estimation performance for several values of c in our simulation study in Section 4.1. We identify that c = 1.6 offers a good balance between efficiency and robustness, at least in the scenarios evaluated. The choice of value for this parameter can also be completely data driven; it can be included as a parameter like β and ν and estimated using the maximum likelihood method in frequentist analyses. A fully Bayesian approach can also be applied where c would be considered unknown and a random variable like the other parameters. In our numerical experiments, we consider it fixed and treat it as a tuning parameter to simplify.
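The correspondence between c and the central mass can be checked numerically. This is our own sketch, assuming as above that z_l = 1 − c/√ν and z_r = 1 + c/√ν (with ν > 1 and c < √ν); it integrates the gamma PDF with mean 1 and shape ν between the two thresholds using Simpson's rule:

```python
import math

def gamma_pdf_mean1(z, nu):
    """Gamma PDF with mean 1 and shape nu (scale 1/nu)."""
    return nu ** nu * z ** (nu - 1.0) * math.exp(-nu * z) / math.gamma(nu)

def central_mass(nu, c, steps=20000):
    """P(z_l <= Z_nu <= z_r) with z_{l,r} = 1 -+ c/sqrt(nu), by Simpson's rule."""
    z_l = max(0.0, 1.0 - c / math.sqrt(nu))
    z_r = 1.0 + c / math.sqrt(nu)
    h = (z_r - z_l) / steps
    total = 0.0
    for i in range(steps):
        lo, hi = z_l + i * h, z_l + (i + 1) * h
        total += (gamma_pdf_mean1(lo, nu)
                  + 4.0 * gamma_pdf_mean1((lo + hi) / 2.0, nu)
                  + gamma_pdf_mean1(hi, nu)) * h / 6.0
    return total
```

Under these assumptions, central_mass(36.3, 1.6) ≈ 0.89 and central_mass(40, 1.65) ≈ 0.90, matching the values quoted above.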

Theoretical properties
The theoretical results presented in this section assume that all explanatory variables are continuous to simplify. The first result that we present is crucial for Bayesian analyses. In our Bayesian framework, we consider that the explanatory-variable data points x_1, . . ., x_n are fixed and known, that is, not realizations of random variables, contrarily to y_1, . . ., y_n. The posterior distribution is thus conditional on the latter only. To use the proposed model for a Bayesian analysis, we need to select a prior distribution for β and ν, denoted by π( • , • ). Importantly, we have to make sure that the resulting posterior distribution is proper, given that any Bayesian analysis assumes this. We will present a proposition providing sufficient conditions. Beforehand, we introduce notation. Writing the joint prior as π(β, ν) = π(β | ν) π(ν), let π( • , • | y) be the posterior density, where y := (y_1, . . ., y_n)^T and

π(β, ν | y) ∝ π(β | ν) π(ν) Π_{i=1}^n f_{β,ν,c}(y_i).

The sufficient conditions of the proposition require n ≥ p and bear on the conditional prior π( • | ν) and the marginal prior π( • ). The assumptions on the prior are weak, which explains why we require n ≥ p, a condition similar to that for frequentist inference. The condition on π( • | ν) is satisfied by any continuous PDF and by the Jeffreys prior π( • | ν) ∝ 1. The condition on π( • ) is satisfied if, for instance, the prior on ν is a gamma distribution with any shape and scale parameters.
We now turn to the characterization of the robustness of the proposed approach against outliers. We characterize the robustness in an asymptotic regime where outliers are considered to be further and further away from the bulk of the data. As mentioned in Section 1.3, an outlier is defined as a couple (x_i, y_i) whose components are incompatible with the trend in the bulk of the data. We can use r_i(β, ν) = √ν (y_i/μ_i − 1) to evaluate this incompatibility. It can be extreme because, for a given x_i (yielding μ_i = exp(x_i^T β)), the value of y_i makes it extreme, or because, for a given y_i, the value of x_i makes it extreme. We mathematically represent such extreme situations by considering an asymptotic scenario where the outliers move away from the bulk of the data along particular paths (see Figure 4). More precisely, we consider that the outliers (x_i, y_i) are such that y_i → ∞ or y_i → 0, with x_i being kept fixed (but perhaps extreme). Our results are asymptotic, meaning here that, for the outlying data points with fixed x_i (but perhaps extreme), there exist y_i values such that the results hold approximately.
We refer to a couple (x_i, y_i) with y_i → ∞ as a large outlier, and a couple with y_i → 0 as a small outlier. The y_i component is referred to as a large/small outlying observation. We consider that each outlying observation goes to ∞ or 0 at its own specific rate. More specifically, for a large outlying observation, we consider that y_i = b_i ω, and that y_i = 1/(b_i ω) for a small outlying observation, with b_i ≥ 1 a constant, and we let ω → ∞. For each non-outlying observation, we assume that y_i = a_i, where a_i > 0 is a constant. Among the n observations y_1, ..., y_n, we assume that k of them form a group of non-outlying observations, s of them form a group of small outlying observations, and l of them form a group of large outlying observations. We denote the sets of non-outlying, small outlying and large outlying observations by y_k, y_s and y_l, respectively. For i = 1, ..., n, we define the binary functions k_i, s_i and l_i as follows: k_i = 1 if y_i is a non-outlying observation, s_i = 1 if it is a small outlying observation, and l_i = 1 if it is a large outlying observation. These functions take the value 0 otherwise. Therefore, we have ∑_{i=1}^n k_i = k, ∑_{i=1}^n s_i = s and ∑_{i=1}^n l_i = l. Central to the characterization of the robustness of the proposed approach is the limiting behaviour of the PDF evaluated at an outlying data point. Proposition 3 below is about this limiting behaviour. Proposition 3. For any i with l_i = 1, and c, ν and µ_i fixed, we have that the PDF term f_ν,c(y_i/µ_i)/µ_i behaves asymptotically like f_ν,c(y_i) as ω → ∞. If ν > 1 and c < √ν (the condition under which f_left exists), the same result holds for any i with s_i = 1.
Proposition 3 suggests that the PDF term of an outlier in the likelihood function or the posterior density behaves in the limit like f_ν,c(y_i). This is made precise in the results below. The term f_ν,c(y_i) is independent of β but depends on ν. It is thus treated as a constant in the likelihood function or posterior density when varying β with ν fixed, but not when varying ν. We thus say that the conflicting information (the outlier) is partially rejected, and the approach is said to be partially robust. Ideally, conflicting information is wholly rejected as its source becomes increasingly remote (West, 1984). Note that the tail thickness of f_ν,c is already extreme (with a density that is not integrable if we omit the log terms in f_left and f_right), and thus it does not seem possible to remedy the situation by considering a density with heavier tails without exceedingly increasing the complexity of the model. Note also that with a polynomial tail, such as that of the density identified from the estimator of Cantoni and Ronchetti (2001) in Section 2.2, it is not possible to get rid of β in the limiting regime, implying a weaker robustness property. This provides an explanation for the difference in behaviour between the estimators observed in Figure 1.
A corollary of Proposition 3 is the characterization of the limiting behaviour of the likelihood function.
Corollary 1. The likelihood function ∏_{i=1}^n f_ν,c(y_i/µ_i)/µ_i, when evaluated at (β, ν) such that ν > 1 and c < √ν, asymptotically behaves like ∏_{i=1}^n [f_ν,c(y_i/µ_i)/µ_i]^{k_i} f_ν,c(y_i)^{s_i+l_i} (8) as ω → ∞, implying that, if the MLE belongs to a compact set with ν > 1 and c < √ν, then it corresponds asymptotically to the mode of (8), provided that the latter belongs to a compact set with ν > 1 and c < √ν as well.
The function in (8) can be seen as the likelihood function based on the non-outliers only, adjusted by a factor of ∏_{i=1}^n f_ν,c(y_i)^{s_i+l_i} coming from the outliers. To get an idea of the impact of this additional factor, we can return to the simulated data set discussed in Section 1.2 and consider y_n = 15 for the outlying-observation value, the latter being the maximum value for which the parameter estimates are computed in Figure 1. We can compare f_ν,c(y_n) (viewed as a function of ν) with the likelihood function based on the non-outlying data points (x_1, y_1), ..., (x_{n−1}, y_{n−1}), that is, ∏_{i=1}^{n−1} f_ν,c(y_i/µ_i)/µ_i (evaluated at β = β̂). We observe in Figure 5 (left panel) that the function f_ν,c(y_n) decreases, but less quickly than ∏_{i=1}^{n−1} f_ν,c(y_i/µ_i)/µ_i increases (right panel), explaining why the resulting MLE is not so influenced by the outlier. With results such as Proposition 3 and Corollary 1, one may wonder if the influence function of β is redescending, in the sense of being asymptotically null. We now explain that this is the case by writing the estimating equation of β in the same way as that of Cantoni and Ronchetti (2001) (recall (4)) and by showing that the function applied to the Pearson residuals is redescending. A difference is that we allow this function to depend on ν.
Proposition 4. With the proposed model, the estimating equation for β is of the form (9), where the third case in the definition of Ψ_ν,c exists if ν > 1 and c < √ν; otherwise, the third case is omitted from the definition of Ψ_ν,c.
For any i with l_i = 1, and c, ν and µ_i fixed, we have that r_i(β, ν) → ∞ as ω → ∞, implying that Ψ_ν,c(r_i(β, ν)) → 0. For any i with s_i = 1, we have that r_i(β, ν) → −√ν as ω → ∞, implying that Ψ_ν,c(r_i(β, ν)) → 0 as well if ν > 1 and c < √ν (the condition under which f_left exists). Note that the function Ψ_ν,c is in general not continuous, which is a difference with the functions typically used in the robust frequentist literature. The reason is that, to simplify, we here focus on designing a PDF which is continuous, but not necessarily continuously differentiable. As a consequence, there is not necessarily a solution to the equation in (9).
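To illustrate the distinction at play (a generic sketch: neither function below is the Ψ_ν,c of the proposed model), the Huber function underlying the estimator of Cantoni and Ronchetti (2001) is bounded but constant beyond the tuning constant, whereas a redescending function, such as the classical Tukey biweight, vanishes for large residuals:

```python
import numpy as np

def huber_psi(r, c):
    # Bounded but non-redescending: equal to +/- c beyond the cutoff.
    return np.clip(r, -c, c)

def tukey_biweight_psi(r, c):
    # Redescending: exactly zero for |r| > c, so extreme residuals are rejected.
    body = r * (1.0 - (r / c) ** 2) ** 2
    return np.where(np.abs(r) <= c, body, 0.0)

r = np.array([0.5, 2.0, 10.0])
print(huber_psi(r, 1.345))           # the residual at 10 still contributes c
print(tukey_biweight_psi(r, 4.685))  # the residual at 10 contributes nothing
```

The property established above for Ψ_ν,c, namely Ψ_ν,c(r_i(β, ν)) → 0 for outliers, is of the redescending type, in contrast with the Huber-type behaviour.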
We finish this section with a theoretical result about the asymptotic behaviour of the posterior distribution. We derive the result in a simplifying situation where the parameter ν is considered fixed, like c; the unknown parameter is thus considered to be only β for the rest of the section. The prior and posterior are thus about this parameter only, and they will be denoted by π and π( · | y), respectively. We further simplify by considering that ν is such that ν > 1 and c < √ν to ensure the existence of both tails in gamma GLM (and of f_left in our model), which corresponds to the gamma PDF shape that is often sought and supported by the data in actuarial science. The simplifying situation can be seen as an approximation of that where ν is considered unknown and random (as previously), but with a posterior mass that concentrates strongly around a specific value. The result that is derived suggests that the posterior density (when both β and ν are considered unknown) asymptotically behaves like one where the PDF terms of the outlying data points in the original density are each replaced by f_ν,c(y_i).
A conclusion of our theoretical result is a convergence of the posterior distribution towards π( · | y_k), which has a density defined as π(β | y_k) = π(β) ∏_{i: k_i=1} [f_ν,c(y_i/µ_i)/µ_i] / m(y_k), where m(y_k) is the corresponding normalizing constant. In Theorem 1 below, we use ⌈ · ⌉ to denote the ceiling function.
Theorem 1. Assume that ν is fixed and such that ν > 1 and c < √ν, that the prior on β is bounded, and that k ≥ ⌈λ_l/λ_r⌉(l + s) + 2p − 1. Then, as ω → ∞: (a) the normalizing constants satisfy m(y) / (m(y_k) ∏_{i=1}^n f_ν,c(y_i)^{s_i+l_i}) → 1; (b) the posterior density converges pointwise: for any β, π(β | y) → π(β | y_k); (c) the posterior distribution converges towards π( · | y_k). In the simplifying situation considered, the conclusions of Theorem 1 about our approach hold once the prior on β is set to be bounded, as long as the number of non-outliers is large enough. A sufficient number is ⌈λ_l/λ_r⌉(l + s) + 2p − 1, which is equivalent to an upper bound on the number of outliers of l + s ≤ (n − 2p + 1)/(1 + ⌈λ_l/λ_r⌉), where λ_l/λ_r varies from about 1 to 4 when c = 1.6, as seen in Figure 3. This condition suggests that the breakdown point, generally defined as the proportion of outliers (l + s)/n that an estimator can handle, is (n − 2p + 1)/(n(1 + ⌈λ_l/λ_r⌉)). In Theorem 1, Result (a) represents the centrepiece; it leads relatively easily to the other results, but its demonstration requires considerable work. Result (a) together with Proposition 3 lead to Result (b), which in turn leads to Result (c) using Scheffé's theorem (Scheffé, 1947). Also, Result (a) together with Proposition 3 suggest that the posterior density with both β and ν unknown asymptotically behaves like one where the PDF terms of the outlying data points in the original density are each replaced by f_ν,c(y_i). Indeed, this posterior density essentially corresponds to π(β | y) if we multiply the numerator by π(ν) and integrate the denominator with respect to π(ν). The reason why we cannot prove a result in the situation where both β and ν are considered unknown (at least using our proof technique) is that we cannot write f_ν,c(y_i) as a product of two terms, one depending on ν but not on y_i and the other depending on y_i but not on ν.
The convergence of the posterior density in Result (b) indicates that the maximum a posteriori probability estimate is robust in the simplifying situation. Result (c) indicates that any estimation based on posterior quantiles (e.g., using posterior medians or Bayesian credible intervals) is robust. It is possible to obtain a result about the convergence of posterior expectations under additional technical conditions. All these results characterize the limiting behaviour of a variety of Bayes estimators.

Numerical experiments
The results shown in Figure 1 are interesting in that they allow a qualitative evaluation of the proposed model and the associated estimator. In Section 4.1, we provide a quantitative evaluation through a simulation study in which the parameters are estimated using the maximum likelihood method. We next turn to Bayesian estimation in Section 4.2, where we present a detailed case study.

Simulation study
In this simulation study, we evaluate the estimation performance of gamma GLM, the method of Cantoni and Ronchetti (2001), and the proposed approach. Both gamma GLM and the proposed model are estimated using the maximum likelihood method. The goal of this simulation study is also to identify good values of c for the proposed model; accordingly, several values are considered: 1.2, 1.3, ..., 2. Values outside of this range yield non-effective approaches, at least based on our numerical experiments. The estimates based on the method of Cantoni and Ronchetti (2001) are computed using the robustbase R package with the default options (Maechler et al., 2022). Note that, in order to make maximum likelihood estimation of the proposed model and the estimation method of Cantoni and Ronchetti (2001) comparable, we do not include a weight function applied to each x_i in the latter. As mentioned, a weight function can be used to decrease the weight of high-leverage points, and such a function can be included in our approach when it is viewed as an M-estimator. Here, we simplify by using vanilla versions.
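For reference, maximum likelihood estimation of the non-robust gamma GLM baseline can be sketched as follows (a minimal sketch with a hypothetical simulated design; the robust fits themselves are computed with robustbase in R, and the f_ν,c-based likelihood is not reproduced here). The gamma density with mean µ_i = exp(x_i^T β) and shape ν gives the log-likelihood terms ν log ν − log Γ(ν) + (ν − 1) log y_i − ν x_i^T β − ν y_i exp(−x_i^T β):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)

def neg_log_lik(params, X, y):
    # Gamma GLM with log link: mu_i = exp(x_i^T beta), shape nu.
    *beta, log_nu = params
    nu = np.exp(log_nu)                  # optimize log(nu) to keep nu > 0
    eta = X @ np.asarray(beta)
    ll = (nu * np.log(nu) - gammaln(nu) + (nu - 1.0) * np.log(y)
          - nu * eta - nu * y * np.exp(-eta))
    return -ll.sum()

# Hypothetical base data set, in the spirit of the paper's simulation design.
n, beta_true, nu_true = 500, np.array([1.0, 0.5]), 5.0
X = np.column_stack([np.ones(n), rng.uniform(0.0, 2.0, n)])
mu = np.exp(X @ beta_true)
y = rng.gamma(shape=nu_true, scale=mu / nu_true)   # mean mu, shape nu

fit = minimize(neg_log_lik, x0=np.zeros(3), args=(X, y))
beta_hat, nu_hat = fit.x[:2], np.exp(fit.x[2])
print(beta_hat, nu_hat)   # should be close to (1.0, 0.5) and 5.0
```

Contaminating such data sets (next paragraphs) skews these maximum likelihood estimates, which is the behaviour the robust alternatives are designed to avoid.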
Performance is measured under several scenarios. In each scenario, base data sets are first simulated using gamma distributions based on the same mechanism as for Figure 1. A scenario where performance evaluation is based on such base data sets makes it possible to measure the efficiency of an estimator in the absence of outliers, when gamma GLM is the gold standard. In the other scenarios, data points of the base data sets are modified to introduce outliers for robustness evaluation. In these scenarios, the location of a data point is shifted as follows: given a location shift of ϑ > 0, we modify r_i(β, ν) = √ν (y_i − µ_i)/µ_i (computed using the true parameter values) by adding ϑ, and obtain r̃_i(β, ν) = r_i(β, ν) + ϑ and the shifted data point (x_i, ỹ_i) with ỹ_i = r̃_i(β, ν) µ_i/√ν + µ_i. In a subset of these scenarios, we also change x_i (of the modified data points (x_i, ỹ_i)) to make the data points high-leverage points; we do this by setting x̃_i = (1, 1.5 max_j x_{j2})^T. The modified data points considered in this case are thus (x̃_i, ỹ_i).
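The contamination mechanism just described can be sketched as follows (a minimal sketch; the data and index choices are hypothetical, and the two-column design matches the x̃_i = (1, 1.5 max_j x_{j2})^T modification):

```python
import numpy as np

def shift_outliers(X, y, beta, nu, idx, theta, high_leverage=False):
    """Shift the points in idx by theta on the Pearson-residual scale:
    y~_i = (r_i + theta) * mu_i / sqrt(nu) + mu_i."""
    X, y = X.copy(), y.copy()
    mu = np.exp(X @ beta)
    r = np.sqrt(nu) * (y - mu) / mu
    y[idx] = (r[idx] + theta) * mu[idx] / np.sqrt(nu) + mu[idx]
    if high_leverage:
        X[idx, 1] = 1.5 * X[:, 1].max()   # x~_i = (1, 1.5 max_j x_j2)^T
    return X, y

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(6), rng.uniform(0.0, 2.0, 6)])
beta, nu = np.array([1.0, 0.5]), 4.0
y = np.exp(X @ beta)                       # residual 0 for every point
X2, y2 = shift_outliers(X, y, beta, nu, [0], theta=7.0)
```

Note that ỹ_i is computed from the original µ_i before x_i is optionally replaced, matching the order of operations in the text.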
The scenarios are now enumerated and described in more detail.
• Scenario 0: simulation of base data sets without modification.
• Scenario 1: simulation of base data sets with modification of 5% of data points chosen uniformly at random using ϑ = 7.
• Scenario 2: simulation of base data sets with modification of 10% of data points chosen uniformly at random using ϑ = 7.
• Scenario 3: simulation of base data sets with modification of 5% of data points chosen uniformly at random using ϑ = 3 and with modification of x i as well.
• Scenario 4: simulation of base data sets with modification of 10% of data points chosen uniformly at random using ϑ = 3 and with modification of x i as well.
The choice of location shifts produces challenging and interesting situations where modified data points are often in a gray area, with uncertainty regarding whether they really are outliers or not. The location shifts have been chosen analogously to those in Gagnon et al. (2020), who study a robust linear regression approach. Regarding the choice of scenarios, Scenarios 1 and 3 can be seen as scenarios with relatively few outliers, and Scenarios 2 and 4 show how performance varies when the number of outliers is doubled. For each scenario, we consider two sample sizes, n = 20 and n = 40, to evaluate the impact of doubling the sample size. Note that similar results can be obtained with larger samples if the number of covariates (and thus of parameters) is increased accordingly.
The performance of each model/estimator is evaluated through the premium-versus-protection approach of Anscombe (1960). This approach consists in computing the premium to pay for using a robust alternative R to gamma GLM when there are no outliers (Scenario 0), and the protection provided by this alternative when the data sets are contaminated (other scenarios). The premium and protection associated with a robust alternative are evaluated through relative measures of estimation error, where S is the scenario under which the protection is evaluated (1, 2, 3 or 4), and M_gamma(β̂ | S), for instance, denotes a measure M of the estimation error of the (true) regression coefficient using β̂ with gamma GLM in Scenario S. The scenario is not specified for the premium because it does not vary; it is Scenario 0. The premium and protection for ν have analogous definitions. We do not combine the estimation errors of all parameters, but instead measure the errors of β and ν separately to highlight a difference in estimation behaviour. For ν, M is the square root of the mean squared error; for β, it is the square root of the expected (squared) Euclidean norm. The expectations are approximated through the simulation of 10,000 data sets. Note that premiums and protections are only evaluated for robust alternatives to gamma GLM, as they are relative measures with respect to gamma GLM.
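The two error measures M just defined can be approximated over simulated replications as follows (a minimal sketch; premiums and protections are then relative comparisons of these measures against those of gamma GLM):

```python
import numpy as np

def error_beta(beta_hats, beta_true):
    # Square root of the expected squared Euclidean norm of beta_hat - beta,
    # approximated by averaging over replications (rows of beta_hats).
    return np.sqrt(np.mean(np.sum((beta_hats - beta_true) ** 2, axis=1)))

def error_nu(nu_hats, nu_true):
    # Square root of the mean squared error for the shape parameter.
    return np.sqrt(np.mean((nu_hats - nu_true) ** 2))
```

In practice, each row of beta_hats (and each entry of nu_hats) would come from fitting one of the 10,000 simulated data sets of a given scenario.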
The results are presented graphically by plotting the couples (Premium(R, β), Protection(R, β | S)) and (Premium(R, ν), Protection(R, ν | S)). The results for Scenarios 1 and 2 are shown in Figure 6, and those for Scenarios 3 and 4 in Figure 7. From this premium-versus-protection perspective, a robust alternative dominates another if its premium is smaller and its protection larger. This means that, in Figures 6 and 7, we want to pay attention to the points in the upper left corner of the plots. The proposed models associated with the different values of c studied are all excellent, as is the method of Cantoni and Ronchetti (2001), as they offer better protections than their premiums in most cases. The proposed approach with c = 1.6 essentially dominates that of Cantoni and Ronchetti (2001) in all cases, sometimes significantly, and offers an appealing premium-versus-protection profile. Based on our numerical experiments, we thus recommend using this value when no prior belief about ν is available for choosing the value of c and when c is not estimated using statistical approaches (recall the discussion at the end of Section 3.1). Note that for a given percentage of outliers (and therefore of non-outliers), a larger sample size translates into enhanced protection for all approaches in all scenarios, a consequence of a larger number of non-outliers for estimation (for a fixed number of parameters).

Health care expenditures: A Bayesian case study
In this section, we provide a Bayesian statistical analysis of health-care expenditures. The data set concerns 100 patients hospitalized at the Centre Hospitalier Universitaire Vaudois in Lausanne, Switzerland, for medical back problems during 1999. The data set is available in the robmixglm R package (Beath, 2018a). It is known to contain outliers, and has been analysed by Marazzi and Yohai (2004), Cantoni and Ronchetti (2006) and Beath (2018b) to highlight the benefits of using robust statistical methods. The objective of a statistical analysis of this data set is to model the cost of stay in that hospital using six explanatory variables, such as the length of stay and the admission type (planned versus emergency). The empirical density of the dependent variable is highly right-skewed. This characteristic of the data, together with the fact that the dependent variable is positive, motivates the use of a gamma GLM, as in insurance pricing.
For the analysis, we assign a prior distribution on the parameters such that π(β | ν) ∝ 1 and ν has a gamma distribution which is weakly informative and not in contradiction with the likelihood function. We obtain samples from the posterior distributions resulting from gamma GLM and the proposed model using Hamiltonian Monte Carlo (Duane et al., 1987). The posterior estimates are computed using Markov-chain samples of size 1,000,000, after discarding the first 10% of the iterations. See Appendix B for the detailed expressions of the posterior densities and gradients. Note that the posterior density resulting from the proposed model has discontinuous derivatives, which surely has an impact on the performance of the numerical method. However, the discontinuity points have null measure and thus do not prevent the use of such a method.
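For the gamma GLM case with the flat prior π(β | ν) ∝ 1 and ν held at a given value, the β-part of the log posterior and its gradient (the ingredients Hamiltonian Monte Carlo requires) take a simple form. The sketch below is not the expressions of Appendix B, only the standard gamma GLM piece in β, up to terms constant in β:

```python
import numpy as np

def log_post_beta(beta, nu, X, y):
    # Gamma GLM log-likelihood in beta with flat prior pi(beta | nu) ∝ 1,
    # up to additive terms not involving beta: -nu * sum(eta_i + y_i e^{-eta_i}).
    eta = X @ beta
    return -nu * np.sum(eta + y * np.exp(-eta))

def grad_log_post_beta(beta, nu, X, y):
    # Gradient: nu * sum_i (y_i e^{-eta_i} - 1) x_i.
    eta = X @ beta
    return nu * (X.T @ (y * np.exp(-eta) - 1.0))

# Hypothetical data for a quick check.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.uniform(0.0, 2.0, 50)])
beta, nu = np.array([0.8, 0.4]), 5.0
y = rng.gamma(shape=nu, scale=np.exp(X @ beta) / nu)
print(grad_log_post_beta(beta, nu, X, y))
```

The gradient vanishes exactly when ∑_i (y_i/µ_i − 1) x_i = 0, i.e., at the (non-robust) maximum likelihood equations for β.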
Posterior medians and 95% highest posterior density (HPD) credible intervals (CIs) are presented in Table 1. They are computed for gamma GLM, based on the whole data set and on the data set without the identified outliers, and for the proposed model. We observe significant differences between the estimates for gamma GLM based on the whole data set and the two other sets of estimates. The most significant difference is for ν: its point estimate for gamma GLM based on the whole data set is about half of that for the proposed model. Smaller estimated values for this parameter translate into larger CI lengths. We observe the impact of an inflated length in particular in the estimation of β_7, where the CI includes 0 in the case of gamma GLM (based on the whole data set), while it does not in the case of the proposed model. The former CI suggests that the explanatory variable is not significantly related to the dependent variable, while the latter suggests otherwise. To formally conduct hypothesis testing or variable selection, one can implement a reversible jump algorithm as in Gagnon (2021), or a lifted version (which may be beneficial for variable selection) as in Gagnon and Doucet (2021) or Gagnon and Maire (2023), to sample from the joint posterior distribution of the models induced by the hypothesis testing or variable selection and their parameters.
While there are differences between the estimates for the proposed model and those for gamma GLM based on the data set without the identified outliers, the conclusions suggested by the CIs are the same (if we consider that a CI with an endpoint equal to 0.00, to two decimal places, includes the value 0). Those differences should not come as a surprise: the estimation of the proposed model does not correspond to that of gamma GLM based on the data set without the identified outliers, but rather to an estimation where erroneous and extreme data points are automatically assigned a weight which decreases with the uncertainty regarding whether they really are outliers.
In regression analyses, residuals are often used to detect outliers. With GLMs, the Pearson residual is computed for each data point. Viewed as a function r_i(β, ν) of the parameters, which is estimated in a plug-in fashion in the original definition, it can instead be estimated in a Bayesian way by the posterior median, for instance, using the Markov-chain samples. The Bayesian estimates based on posterior medians are plotted against the posterior medians of µ_i (the Bayesian analogue of the fitted values) in Figure 8. We observe that the residuals based on the estimation of the proposed model are overall more dispersed than those based on the estimation of gamma GLM, mainly due to a smaller estimated value of ν in the latter case. Data points are flagged as outliers and investigated if their residuals are extreme. With the residuals based on the estimation of gamma GLM, it is less evident which data points should be flagged as outliers, a consequence of the masking effect. Outlier detection based on the estimation of the proposed model is more effective.
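The Bayesian residual estimates can be computed from Markov-chain samples as follows (a minimal sketch; draws_beta and draws_nu stand for posterior samples obtained by any sampler):

```python
import numpy as np

def bayes_pearson_residuals(draws_beta, draws_nu, X, y):
    """Posterior medians of r_i(beta, nu) = sqrt(nu) * (y_i / mu_i - 1),
    computed over Markov-chain samples.
    draws_beta: (S, p) array; draws_nu: (S,) array."""
    mu = np.exp(draws_beta @ X.T)                          # (S, n) fitted means
    r = np.sqrt(draws_nu)[:, None] * (y[None, :] / mu - 1.0)
    return np.median(r, axis=0)
```

With degenerate draws (all equal to a single parameter value), this reduces to the usual plug-in Pearson residual, which is a convenient sanity check.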

Discussion
In this paper, we highlighted that (non-)robustness against outliers is an aspect to bear in mind when conducting statistical analyses using GLMs. We also highlighted that there are few robust alternatives, especially when one wants to conduct Bayesian statistical analyses. While focusing on gamma GLM, which is a ubiquitous tool in actuarial science, we proposed an effective robust approach that is modelling-based and can thus be used for both frequentist and Bayesian analyses. The proposed model is easy to interpret and understand, and can be straightforwardly estimated (at least on small-to-moderate-size data sets). The resulting MLE (which can be viewed as an M-estimator for gamma GLM) is similar in flavour to the most commonly used robust frequentist estimator, that of Cantoni and Ronchetti (2001). The theoretical and empirical comparison made in the paper shows that the proposed approach is an appealing alternative.
In future work, it would be interesting to study the computational aspect in depth, that is, the scalability with the sample size and with the number of covariates. Also, there may be ways to improve the proposed approach. A weakness that we identified (which the approach of Cantoni and Ronchetti (2001) also has) is that the area where the original gamma PDF is replaced by a robustified function (in the case of the method of Cantoni and Ronchetti (2001), it is instead the area where the derivative of the log-likelihood is replaced) is based on the Pearson residual, in the same spirit as in robust linear regression. But, contrary to the standardized residual in linear regression, the Pearson residual is not symmetrically distributed. We believe it would be interesting to explore whether using a Wilson-Hilferty transformation of the Pearson residual instead would yield a significant improvement (see, e.g., Terrell (2003) for a discussion of Wilson-Hilferty transformations). We expect the resulting approach not to perform significantly better than that proposed here when the shape parameter is moderate to large, as in our numerical experiments in Section 4, because the asymmetry of the gamma distribution is not too severe in this case. Finally, it would be interesting to work on the adaptation of GLMs whose response-variable distributions do not have tails, like logistic regression. For such a model, it is not clear to us that there even exists a precise definition of outliers. There are thus interesting fundamental questions to be answered regarding robustness when one wants to use such a GLM.
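The Wilson-Hilferty idea mentioned above rests on the cube-root transform of a gamma variable being approximately normal. A quick numerical check (not the adaptation suggested here, only the classical approximation it relies on): if X follows a gamma distribution with shape ν and scale 1, then (X/ν)^{1/3} is approximately normal with mean 1 − 1/(9ν) and variance 1/(9ν), which yields an accurate quantile approximation even for moderate ν:

```python
import numpy as np
from scipy import stats

def wilson_hilferty_quantile(p, nu):
    # Wilson-Hilferty approximation of the p-quantile of Gamma(shape=nu, scale=1):
    # invert the normal approximation of (X/nu)^(1/3).
    z = stats.norm.ppf(p)
    return nu * (1.0 - 1.0 / (9.0 * nu) + z * np.sqrt(1.0 / (9.0 * nu))) ** 3

nu = 5.0
for p in (0.05, 0.5, 0.95):
    print(p, wilson_hilferty_quantile(p, nu), stats.gamma.ppf(p, a=nu))
```

The relative error is well below 1% at ν = 5, which is consistent with the expectation above that the gain from such a transformation is modest when the shape parameter is moderate to large.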

A Proofs
We now present the proofs of all theoretical results in the same order as the results appeared in the paper.
Proof of Proposition 1. We prove the result for λ_r; the result for λ_l is proved analogously.
To prove the result, we will prove that the numerator of the fraction converges towards c√(2π) e^{c²/2} and that the denominator converges towards 1 − Φ(c). We start with the analysis of the numerator. We can thus focus on the other term, which, after simplification, is analysed below.
Taking the logarithm of this term shows that it converges towards the appropriate limit. This concludes the proof that the numerator in λ_r converges towards c√(2π) e^{c²/2}.
We now turn to the proof that the denominator, P[Z_ν > z_r], converges towards 1 − Φ(c). We have that Z_ν follows a gamma distribution whose mean and shape parameters are given by 1 and ν, respectively. This implies that the scale parameter is given by 1/ν. Therefore, P[Z_ν > z_r] = P[X_ν > ν z_r], where X_ν follows a gamma distribution whose shape and scale parameters are ν and 1, respectively. We have that X_ν has the same distribution as X_1 + · · · + X_⌊ν⌋ + X, where X_1, ..., X_⌊ν⌋ are independent random variables, each having an exponential distribution with a scale parameter of 1, and X follows a gamma distribution whose shape and scale parameters are ν − ⌊ν⌋ and 1, respectively. By the central limit theorem, (X_1 + · · · + X_⌊ν⌋ − ⌊ν⌋)/√⌊ν⌋ converges in distribution towards a standard normal distribution. Also, X/√ν converges towards 0 with probability 1. Therefore, by Slutsky's theorem, (X_ν − ν)/√ν converges in distribution towards a standard normal distribution, which concludes the proof. □ We now present and prove two lemmas that will be used in the proof of Proposition 2.
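The convergence of the denominator can be checked numerically (a minimal sketch; taking z_r = 1 + c/√ν, the point where the Pearson residual equals c, which is an assumption here since z_r is defined earlier in the paper):

```python
import numpy as np
from scipy import stats

c = 1.6
target = 1.0 - stats.norm.cdf(c)    # 1 - Phi(c)
for nu in (10.0, 100.0, 10000.0):
    z_r = 1.0 + c / np.sqrt(nu)     # Pearson residual equal to c
    # Z_nu: gamma with mean 1 and shape nu, i.e. Gamma(shape=nu, scale=1/nu)
    tail = stats.gamma.sf(z_r, a=nu, scale=1.0 / nu)
    print(nu, tail, target)
```

The tail probability approaches 1 − Φ(c) as ν grows, in line with the central limit argument above.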
Proof. Based on (7), we analyse the three parts of the function (of µ) separately. We consider that all three parts exist; otherwise, the part with f_left has to be skipped.
If we evaluate the function at the boundaries, we obtain f_mid(z_r) z_r/y and f_mid(z_l) z_l/y, highlighting that the function is continuous. Now, we show that the function is strictly increasing on [y/z_r, y) and then strictly decreasing on (y, y/z_l], which implies that it is unimodal with a mode at µ = y. The derivative of the log of f_mid(y/µ)(y/µ)/y with respect to µ is given by ν(y − µ)/µ². The root of this function is µ = y. If µ < y, the derivative is strictly positive, meaning that the function f_mid(y/µ)(y/µ)/y is strictly increasing on that part of the domain. If µ > y, the derivative is strictly negative, meaning that the function is strictly decreasing on that part of the domain. This allows us to conclude that the function (of µ) f_ν,c(y/µ)/µ is strictly increasing on [y/z_r, y) and then strictly decreasing on (y, y/z_l]. Thus, f_mid(1)/y is the maximum of f_ν,c(y/µ)/µ. Therefore, the upper bound of f_ν,c(y/µ)/µ is given by f_ν,c(1)/y = (e^{−1}ν)^ν/(y Γ(ν)).
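The unimodality claim and the bound can be checked numerically on the middle part (a minimal sketch using the gamma PDF with mean 1 and shape ν as f_mid, with hypothetical values of ν and y):

```python
import math

import numpy as np
from scipy import stats

nu, y = 5.0, 2.3
mu_grid = np.linspace(0.5, 8.0, 20001)
# g(mu) = f_mid(y / mu) / mu, with f_mid the gamma PDF with mean 1, shape nu.
g = stats.gamma.pdf(y / mu_grid, a=nu, scale=1.0 / nu) / mu_grid

print(mu_grid[np.argmax(g)])                       # close to the mode mu = y
print(g.max(), (math.exp(-1) * nu) ** nu / (y * math.gamma(nu)))
```

The grid maximizer sits at µ = y and the maximum matches the stated bound (e^{−1}ν)^ν/(y Γ(ν)).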
Proof. We separate the integral into two parts: from 0 to a large positive constant ν*, and from ν* to ∞. The function (e^{−1}ν)^ν/Γ(ν) is strictly increasing; thus, this function is bounded on 0 < ν ≤ ν* by (e^{−1}ν*)^{ν*}/Γ(ν*). As ν gets large, Γ(ν) can be approximated by Stirling's formula, S(ν) = √(2π/ν)(e^{−1}ν)^ν. We thus have (e^{−1}ν)^ν/Γ(ν) ≈ (e^{−1}ν)^ν/S(ν) = √(ν/(2π)). More precisely, for all δ > 0, we can find a ν* such that the approximation holds up to a factor 1 + δ for all ν ≥ ν*. The last step is due to the condition ∫_0^∞ π(ν) ν^{(n−p)/2} dν < ∞ and to the fact that π( · ) is a proper PDF. □ Proof of Proposition 2. To prove this proposition, it suffices to show that the marginal m(y) is finite. To prove this, we first split the data points into two parts. The first part contains p data points, which will be used to perform a change of variables from β to z_i = y_i/exp(x_i^T β) for i = 1, ..., p. Without loss of generality, we choose the first p data points. For the rest of the n − p data points, we bound ∏_{i=p+1}^n f_ν,c(y_i/µ_i)/µ_i by a function depending on ν, and then we show that this bound, multiplied by π(ν), is integrable with respect to ν. We thus use the condition that n ≥ p. When n = p, the proof is simpler, because the part with the rest of the n − p data points does not exist. In step a, we split the data points into two parts as explained previously, and we use that π(β | ν) ≤ B with B a positive constant. We also bound the product of f_ν,c(y_i/µ_i)/µ_i for i = p + 1, ..., n using Lemma 1. In step b, we perform a change of variables from β to z_i = y_i/exp(x_i^T β), for i = 1, ..., p. For each i, we have ∂z_i/∂β = y_i x_i^T/exp(x_i^T β). The determinant is non-null because all explanatory variables are continuous. Indeed, consider the case p = 2 for instance; the determinant is different from 0 provided that x_12 ≠ x_22, which happens with probability 1. When any type of explanatory variables is considered, we need to be able to select p observations, say those with x_{i_1}, ..., x_{i_p}, such that the matrix with rows x_{i_1}^T, ..., x_{i_p}^T has a non-null determinant. In step c, we used that f_ν,c is a PDF. In step d, we used Lemma 2. □ Proof of Proposition 3. We first consider the case where i is such that l_i = 1, and in order to prove the result we show that the ratio of f_ν,c(y/µ)/µ to f_ν,c(y) converges to 1. In the denominator f_ν,c(y), f_right is activated, where z_r, f_mid(z_r) and λ_r depend only on ν and c. As y/µ → ∞, f_right is also activated in f_ν,c(y/µ)/µ. We now consider y → 0, under the condition that c < √ν and ν > 1. Under this condition, f_left exists and is activated in both f_ν,c(y) and f_ν,c(y/µ)/µ. A similar computation then yields the result.

□
Proof of Proposition 4. Let us consider that ν > 1 and c < √ν; otherwise, the third case in the partial derivative below is omitted. The partial derivative of the log-likelihood with respect to β can be rewritten in the form given in the statement of the proposition. The proof is concluded by recalling that r_i(β, ν) = √ν(y_i/µ_i − 1). □ Proof of Theorem 1. We start with the proof of Result (a), which is quite lengthy. We next turn to the proofs of Results (b) and (c), which are shorter.
Let us assume for now that m(y) < ∞ for all ω, and that m(y_k) < ∞; this is proved below. We show that the relevant integral converges to 1 as ω → ∞. Assuming that we can interchange the limit and the integral, we obtain the result as follows. In step a, we use Proposition 3. In step b, we use that π(β | y_k) is proper. Indeed, we notice in the proof of Proposition 2 that, if ν is fixed, the posterior distribution (of β) is proper if the prior is bounded and if k ≥ p. These conditions are satisfied because we assume that π is bounded and that k ≥ ⌈λ_l/λ_r⌉(l + s) + 2p − 1 ≥ p. Note that this implies that m(y_k) < ∞ and m(y) < ∞ for all ω.
To prove that we can interchange the limit and the integral, we use Lebesgue's dominated convergence theorem. We thus need to prove that the integrand is bounded by an integrable function of β that does not depend on ω. Therefore, we need to show that the integrand is bounded by a product g(β) h(β), with g an integrable function and h a bounded function, where we used that π(β) ≤ B for all β, with B a positive constant. The functions g and h are defined below. Under the assumptions of Theorem 1, we know that there are at least ⌈λ_l/λ_r⌉(l + s) + 2p − 1 non-outliers in the data set. Without loss of generality, assume that the first ⌈λ_l/λ_r⌉(l + s) + 2p − 1 points are non-outliers, that is, k_1 = · · · = k_{⌈λ_l/λ_r⌉(l+s)+2p−1} = 1.
Step 1. We first choose p points among the non-outliers. Without loss of generality, we choose (x_1, y_1), ..., (x_p, y_p). We show that g defined as g(β) := B ∏_{i=1}^p f_ν,c(y_i/µ_i)/µ_i is integrable. We proceed similarly as in the proof of Proposition 2, using the change of variables z_i = y_i/exp(x_i^T β), for i = 1, ..., p. The determinant term is different from 0 because x_1, ..., x_p are linearly independent (because the covariates are continuous). Given that these p observations are non-outlying, ∏_{i=1}^p 1/y_i is bounded and independent of ω.
Step 2. We show that the rest of the product, i.e.
is bounded, and that the bound depends on neither β nor ω.
In order to show this, we split the domain of β. Doing so will allow for technical arguments yielding bounds that depend on neither β nor ω. Before presenting the precise split of the domain, we provide intuition about why it is useful to proceed in this way. Consider i with l_i = 1. The main difficulty in bounding f_ν,c(y_i/µ_i)/[µ_i f_ν,c(y_i)] essentially resides in dealing with the term f_ν,c(y_i) in the denominator, because it is small. The case i with l_i = 1 is the easiest for providing intuition; the case i with s_i = 1 is analogous, but intuition is more difficult to provide. The goal is thus essentially to get rid of f_ν,c(y_i). When x_i^T β ≤ log(ω)/2, the ratio y_i/µ_i is at least of order √ω and, for any fixed c and ν, a corollary of Proposition 3 can be used to bound f_ν,c(y_i/µ_i)/[µ_i f_ν,c(y_i)].
When x_i^T β > log(ω)/2, we are not guaranteed that y_i/µ_i is large and thus cannot use the PDF term of the outlier to bound 1/f_ν,c(y_i). We thus have to resort to non-outliers. With non-outliers, we consider y_j as fixed, and it is only when 1/exp(x_j^T β) is large that the PDF term f_ν,c(y_j/µ_j)/µ_j can be used to bound 1/f_ν,c(y_i). The strategy used below to deal with this situation is to divide the parameter space into mutually exclusive areas for which we know exactly in which case we are: either we can use the outlier PDF term to bound 1/f_ν,c(y_i) or not; in the latter case, we know that we have sufficiently many non-outliers that can be used to bound all terms 1/f_ν,c(y_i). To have precise control over the number of non-outliers that can be used, we prove that, when we cannot use PDF terms of outliers to bound 1/f_ν,c(y_i), we have a maximum of p − 1 non-outliers that cannot be used for that job either. Using that k ≥ ⌈λ_l/λ_r⌉(l + s) + 2p − 1, we know that at least ⌈λ_l/λ_r⌉(l + s) non-outlying points can be used to bound the terms 1/f_ν,c(y_i), which will be shown to be sufficient (recall that p non-outlying points have already been used to obtain an integrable function).
Let us now continue with the formal proof and present how we split the domain of β, with I_L, I_S, and I_F defined as follows: I_L := {i : i ∈ {⌈λ_l/λ_r⌉(l + s) + 2p, . . ., n} and l_i = 1}, I_S := {i : i ∈ {⌈λ_l/λ_r⌉(l + s) + 2p, . . ., n} and s_i = 1}, γ being a positive constant that will be defined below. Remember that the first p points, which are non-outliers, have already been used for the purpose of integration in Step 1. Thus, the index of each remaining non-outlier is greater than or equal to p + 1.
The set O_i represents the values of β satisfying log(b_i ω) − x_i^T β < log(ω)/2 for i ∈ I_L, and log(b_i/ω) − x_i^T β < log(ω)/2 for i ∈ I_S. The points (x_i, log(b_i ω)) and (x_i, log(b_i/ω)) can be seen as log transformations of large outliers and of small outliers, respectively, given that ω → ∞.

Now we claim that O_i ∩ F_{i_1} ∩ · · · ∩ F_{i_p} = ∅ for all i, i_1, . . ., i_p with i_j ≠ i_s whenever j ≠ s. To prove this, we use the fact that x_i (a vector of dimension p) can be expressed as a linear combination of x_{i_1}, . . ., x_{i_p}. This is true because all explanatory variables are continuous, and therefore the space spanned by the vectors x_{i_1}, . . ., x_{i_p} has dimension p. As a result, if β ∈ F_{i_1} ∩ · · · ∩ F_{i_p} and x_i = Σ_{s=1}^p a_s x_{i_s} for some a_1, . . ., a_p ∈ R, then β cannot belong to O_i, as the following steps show. In Step a, we use that b_i ≥ 1 and we simplify the form of the linear combination. In Step b, because β ∈ F_{i_1} ∩ · · · ∩ F_{i_p}, we have |x_i^T β| < log(ω)/γ for all i ∈ {i_1, . . ., i_p}. Thus, −Σ_{s=1}^p a_s x_{i_s}^T β > −(log(ω)/γ) Σ_{s=1}^p |a_s|. In Step c, we define the constant γ such that γ ≥ 2 Σ_{s=1}^p |a_s| (we define γ such that it satisfies this inequality for any combination of i and i_1, . . ., i_p; without loss of generality, we consider that γ ≥ 1).
The proof is analogous in the case where i ∈ I_S. In Step d, we use the fact that x_i^T β > −log(ω)/γ, thus −Σ_{s=1}^p a_s x_{i_s}^T β < (log(ω)/γ) Σ_{s=1}^p |a_s|. In Step e, we use that γ is such that γ ≥ 2 Σ_{s=1}^p |a_s|. Therefore, if β ∈ F_{i_1} ∩ · · · ∩ F_{i_p}, then β ∉ O_i. This proves that O_i ∩ F_{i_1} ∩ · · · ∩ F_{i_p} = ∅ for all i, i_1, . . ., i_p with i_j ≠ i_s whenever j ≠ s. This result in turn implies that the domain of β can be written as a union of subsets, one for each configuration (for i_1 ∈ I_F, and so on). We find an upper bound on each of these subsets. Because there is a finite number of subsets, we will be able to bound h by the maximal bound. Recall that h(β, ω) = A × B × C, where A, B and C represent the product on the left, the product in the middle and the product on the right, respectively. For A, we have a bound involving the factor 2 + 2 log(b_i)/log(ω). We are thus sure that b_i ω/µ_i and b_i ω are both on the right tail of f_ν,c, that is f_right. In Step b, we use that log(b_i ω) − x_i^T β ≥ log(ω)/2. In Step c, we have 2 log(b_i)/log(ω) ≤ 1 for large enough ω. In Step d, for any fixed ν, 3^{lλ_r} is finite given that λ_r, which depends only on c and ν, is finite.
For B, the argument is analogous. We are thus sure that (1/(b_i ω))/µ_i and 1/(b_i ω) are both on the left tail of f_ν,c, that is f_left. In Step b, we use that log(y_i) − x_i^T β ≤ −log(ω)/2. In Step c, 3^{sλ_l} is finite given that λ_l, which depends only on c and ν, is finite.
For C, we have the following. In Step a, according to Lemma 1, f_ν,c(y/µ)/µ is upper bounded by (e^{−1}ν)^ν/(yΓ(ν)) for any value of µ, when y, ν and c are considered fixed. We conclude that, in this situation, A × B × C is bounded. Situation 2. Consider now that β belongs to one of the mutually exclusive sets of the form F_{i_1}^c ∩ · · · (for i_1 ∈ I_F, etc.). We analyse A, B and C separately. For A, in Step a, we can deduce from Lemma 1 that, viewed as a function of µ, (y/µ) f_ν,c(y/µ) is bounded by (e^{−1}ν)^ν/Γ(ν), for all ν, c, and y. Notice that the only part of (10) that depends on ω is a product of terms log(b_i ω)^{λ_r}, one for each large outlier. Analogously, for B, in Step a we change the sign of log(1/(b_i ω)) because log(z_l) < 0, and in Step b we bound (y/µ) f_ν,c(y/µ) by (e^{−1}ν)^ν/Γ(ν). Notice that the only part of (11) that depends on ω is a product of terms log(b_i ω)^{λ_l}, one for each small outlier. We now turn to C. We have shown previously that, in any of the sets to which β can belong, there are at most p − 1 non-outlying points such that |x_i^T β| < log(ω)/γ. The case where that upper bound is attained is the worst case. Without loss of generality, suppose that all non-outlying points such that |x_i^T β| < log(ω)/γ have indices belonging to {p + 1, . . ., 2p − 1}.
In the situation where y_i/µ_i is on the right tail, that is y_i/µ_i ≥ a_i ω^{1/γ}, we have the following. In Step a, we have that µ_i ≤ ω^{−1/γ} given that y_i/µ_i ≥ a_i ω^{1/γ}, thus −log(µ_i) ≥ log(ω)/γ. In Step b, we use the fact that log(a_i) ≥ −log(ω)/(2γ) for large enough ω. We thus have log(a_i) + log(ω)/γ ≥ log(ω)/(2γ). In the situation where y_i/µ_i is on the left tail, that is y_i/µ_i ≤ a_i/ω^{1/γ}, we have the following. In Step a, we change the sign because log(z_l) < 0. In Step b, we use the fact that log(a_i) ≤ log(ω)/(2γ) for large enough ω.
The reason we consider these two cases is that we want to use densities of "extreme non-outliers", that is, f_ν,c(y_j/µ_j)/µ_j with j such that β ∈ F_j^c, to cancel each log(b_i ω) at some power for i ∈ I_L ∪ I_S that appears in the bounds of A and B (recall (10) and (11)). As explained, there are at least ⌈λ_l/λ_r⌉(l + s) extreme non-outliers that can be used. The major difficulty here is that we do not know how many of those ⌈λ_l/λ_r⌉(l + s) extreme non-outliers are such that y_j/µ_j is on the right tail, and how many are such that y_j/µ_j is on the left tail; this depends on the value of β. We thus have to consider all possible scenarios, including the worst-case scenario. We now present precisely how, in all scenarios, we bound each log(b_i ω) at some power for i ∈ I_L ∪ I_S by using the densities of extreme non-outliers.
Let us first consider that y_i is a large outlying observation, that is, i ∈ I_L. We take ⌈λ_l/λ_r⌉ non-outliers among the ⌈λ_l/λ_r⌉(l + s) extreme non-outliers; these are thus such that β ∈ F_j^c for all of them. In other words, all these points are such that |x_j^T β| ≥ log(ω)/γ. There are two possible cases. Case 1. There is at least one point among the ⌈λ_l/λ_r⌉ points such that y_j/µ_j ≥ a_j ω^{1/γ}, implying that the density is evaluated on the right tail. In this case, we have the following. In Step a, we take one point such that y_j/µ_j ≥ a_j ω^{1/γ}. We bound f_ν,c(y_j/µ_j)/µ_j by the bound presented in (12), and we bound the rest of the points by 1 using the bounds in (12) and (13), given that 2γ/log(ω) ≤ 1 for large enough ω. In Step b, we have that log(b_i ω)/log(ω) = (log(b_i) + log(ω))/log(ω) ≤ 2, as log(b_i)/log(ω) ≤ 1 for large enough ω. In Step c, every term is a well-defined constant, and the result is thus finite.
We showed that we can use the product of the densities of ⌈λ_l/λ_r⌉ extreme non-outliers to offset log(b_i ω)^{λ_r} for i ∈ I_L, so that the product is bounded. The approach is analogous for small outliers, that is, for i ∈ I_S. A difference is that, in Case 1, we instead consider that there is at least one point among the ⌈λ_l/λ_r⌉ points such that y_j/µ_j ≤ a_j/ω^{1/γ}, implying that the density is evaluated on the left tail. Also, in Case 2, we instead consider that no point among the ⌈λ_l/λ_r⌉ points is such that y_j/µ_j ≤ a_j/ω^{1/γ}, implying that the density of every point is evaluated on the right tail. In this case, a term of the form 2γ log(b_i ω)/log(ω) raised to the power ⌈λ_l/λ_r⌉λ_r − λ_l appears, and we bound the right factor by 1 as above, using that ⌈λ_l/λ_r⌉λ_r − λ_l ≥ (λ_l/λ_r)λ_r − λ_l = 0. We therefore know that, in all scenarios, we can offset log(b_i ω) at some power for i ∈ I_L ∪ I_S using the product of the densities of ⌈λ_l/λ_r⌉ extreme non-outliers.
Therefore, multiplying A, B and C, we obtain the following. In Step a, we bound A and B by the expressions shown previously (see (10) and (11)). In Step b, we bound f_ν,c(y_i/µ_i)/µ_i, for i = p + 1, . . ., 2p − 1, by (e^{−1}ν)^ν/(y_i Γ(ν)) (see Lemma 1); recall that these are non-outliers such that |x_i^T β| < log(ω)/γ. In Step c, we simplify each log(b_i ω)^{λ_r} and log(b_i ω)^{λ_l} by multiplying each of these terms by the product of densities of ⌈λ_l/λ_r⌉ extreme non-outliers and by bounding the resulting product by (4γ)^{λ_r} or (4γ)^{λ_l}, as explained earlier. In Step d, we bound the remaining non-outlier terms f_ν,c(y_i/µ_i)/µ_i by (e^{−1}ν)^ν/(y_i Γ(ν)). Among the k non-outliers, p were used for the change of variables and to integrate over β at the beginning, p − 1 terms of non-extreme non-outliers were bounded, and ⌈λ_l/λ_r⌉(l + s) were used to offset the outliers. After Steps (a)–(c), there are thus possibly still k − p − (p − 1) − ⌈λ_l/λ_r⌉(l + s) = k − 2p − ⌈λ_l/λ_r⌉(l + s) + 1 non-outliers left that need to be considered, which is what is done in Step d. The condition of the theorem, k ≥ ⌈λ_l/λ_r⌉(l + s) + 2p − 1, ensures that we have enough non-outlying points to bound the whole product. The proof is simpler, and still valid, if there is no non-outlier left after Steps (a)–(c); Step (d) is then simply skipped. In Step e, every y_i in the expression is a non-outlying observation, and is thus equal to a_i. Finally, in Step f, the whole expression is finite given that all terms are constants.
Therefore, h(β, ω) = A × B × C is bounded. This completes the proof of Result (a). We now turn to the proof of Result (b). Using Result (a) and Proposition 3, we obtain the stated asymptotic behaviour as ω → ∞, for any β ∈ R^p. We also showed that π(β | y_k) is proper. This concludes the proof of Result (b).
We finish with the proof of Result (c). This result is a direct consequence of Result (b) using Scheffé's theorem (Scheffé, 1947). □

B Supplementary material for Section 4.2

With the gamma GLM, the posterior estimates are computed using Hamiltonian Monte Carlo. We apply a change of variables η := log ν so that the resulting posterior distribution has R^(p+1) as support. The resulting posterior density follows by recalling the prior on the parameters. The log density (if we omit the normalization constant) involves the shape and scale parameters α > 0 and θ > 0 of the prior distribution of ν; its derivative involves Γ′, the derivative of Γ.
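To make the change of variables concrete, here is a minimal sketch of the unnormalized log posterior in (β, η). All names are hypothetical and the priors are assumptions for illustration (a flat prior on β and a gamma(α, θ) prior on ν); the Jacobian of ν = e^η contributes the additive η term:

```python
import numpy as np
from scipy.special import gammaln

def log_posterior(beta, eta, X, y, alpha=2.0, theta=1.0):
    """Unnormalized log posterior of (beta, eta) for a gamma GLM with log link.

    eta = log(nu); the Jacobian of the change of variables nu -> eta adds +eta.
    Assumed priors: flat on beta, gamma(shape=alpha, scale=theta) on nu.
    """
    nu = np.exp(eta)
    mu = np.exp(X @ beta)  # mean of y_i under the log link
    # gamma log likelihood with shape nu and mean mu (i.e., scale mu / nu)
    loglik = np.sum(nu * np.log(nu / mu) + (nu - 1.0) * np.log(y)
                    - nu * y / mu - gammaln(nu))
    log_prior_nu = (alpha - 1.0) * np.log(nu) - nu / theta  # up to a constant
    return loglik + log_prior_nu + eta  # + eta: Jacobian term
```

The likelihood term can be checked against scipy.stats.gamma.logpdf(y, a=nu, scale=mu/nu); a gradient-based sampler such as Hamiltonian Monte Carlo can then be run on this unconstrained parametrization.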
With the proposed robust GLM, the posterior density π(•, • | y) (with the same prior as before) is defined analogously. We also use Hamiltonian Monte Carlo to compute the posterior estimates.
The posterior density resulting from the change of variables η := log ν, and its log density (omitting the normalization constant), involve λ′_r := ∂λ_r/∂η and λ′_l := ∂λ_l/∂η, which are evaluated numerically. A difference between the model defined here and that in Section 3.1 is that z_l is defined differently; here, it is equal to 1 − (µ_i/ν)c as soon as this quantity is positive. This is because the inverse Gaussian PDF always has a left tail, in the sense that f_µ,ν(y) → 0 as y → 0, regardless of the values of µ and ν.
Comparisons between inverse Gaussian PDFs (with mean and shape parameters of 1 and ν/µ_i, respectively) and f_µ_i,ν,c with c = 1.6 are shown for different values of ν/µ_i in Figure 10. The observations are the same as in Section 3.1: both PDFs are globally quite similar, but beyond the threshold at which they start to be defined differently, f_µ_i,ν,c first decreases slightly faster over a short interval (a consequence of the continuity of the function under the constraint of integrating to 1), after which f_µ_i,ν,c goes above the inverse Gaussian PDF. We finish this section by presenting results of a numerical experiment similar to that conducted for Figure 1. The simulation setting is the same, with the obvious difference that the original data set is simulated using an inverse Gaussian distribution. Next, we gradually increased the value of y_n from 5 (a non-outlying value) to 15 (a clearly outlying value). For each data set associated with a different value of y_n, we estimated the parameters ν, β_1 and β_2 of the inverse Gaussian GLM and of the proposed model using the maximum likelihood method. As expected, we observe a robustness problem with the estimation of the inverse Gaussian GLM, a problem which is addressed by the proposed model. In Section C.2, we present a theoretical result which provides a characterization of the robustness of the proposed model. Note that a robust estimation of the inverse Gaussian GLM is not available through the robustbase R package.
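The sensitivity experiment can be sketched as follows. This is illustrative only, not the authors' exact setup: we fit a simplified inverse Gaussian regression with log link and a constant shape parameter by direct maximum likelihood, and refit as the last observation moves from a non-outlying to an outlying value:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, X, y):
    """Negative log-likelihood: y_i ~ inverse Gaussian with mean mu_i = exp(x_i^T beta)
    and shape nu (parametrized as log_nu for an unconstrained search)."""
    beta, log_nu = params[:-1], params[-1]
    nu = np.exp(log_nu)
    mu = np.exp(X @ beta)
    ll = 0.5 * np.log(nu / (2.0 * np.pi * y**3)) - nu * (y - mu)**2 / (2.0 * mu**2 * y)
    return -np.sum(ll)

rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu = np.exp(X @ np.array([0.5, 0.3]))
y = rng.wald(mu, 2.0)  # inverse Gaussian (Wald) draws: mean mu_i, shape 2

fits = {}
for y_n in (5.0, 15.0):  # non-outlying vs clearly outlying last observation
    y_mod = y.copy()
    y_mod[-1] = y_n
    fits[y_n] = minimize(neg_loglik, x0=np.zeros(3), args=(X, y_mod),
                         method="Nelder-Mead", options={"maxiter": 5000})
```

The non-robust fit shifts noticeably as y_n grows, mirroring Figure 11; the proposed robust variant is not reproduced here, as it requires the full definition of f_µ_i,ν,c.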

C.2 Theoretical properties
In this section, we first consider a Bayesian framework and provide conditions under which the posterior distribution is proper (a result analogous to Proposition 2). We next provide a result about the limiting behaviour of the proposed PDF evaluated at an outlying data point (a result analogous to Proposition 3).
Let us start by setting our Bayesian framework. The framework is as in Section 3.2. We consider that the explanatory-variable data points x_1, . . ., x_n are fixed and known, that is, not realizations of random variables, unlike y_1, . . ., y_n. The posterior distribution is thus conditional on the latter only. The prior distribution is denoted by π(•, •). Let π(•, • | y) be the posterior distribution, where y := (y_1, . . ., y_n)^T. Proposition 6. Assume that π(•, •) is a proper PDF such that ∫∫ ν^(n/2) Π_{i=1}^n (1 + e^{−x_i^T β/2}) π(β, ν) dβ dν < ∞.
Then, the posterior distribution is proper.
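The integrability condition can be sanity-checked for a normal prior on β: expanding the product Π_i (1 + e^{−x_i^T β/2}) gives a sum of terms e^{t^T β}, and for β ~ N(m, Σ) each such term integrates to the normal moment generating function exp(t^T m + t^T Σ t/2), which is finite. A quick Monte Carlo check of one such term (all numeric values illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
p = 3
m = np.array([0.2, -0.1, 0.4])      # prior mean (illustrative)
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)     # a positive definite prior covariance
x = np.array([1.0, 0.5, -0.3])      # one explanatory-variable point
t = -x / 2.0

# closed form: E[exp(t^T beta)] = exp(t^T m + t^T Sigma t / 2)
mgf_exact = np.exp(t @ m + 0.5 * t @ Sigma @ t)

# Monte Carlo estimate of the same expectation
beta = rng.multivariate_normal(m, Sigma, size=200_000)
mgf_mc = np.mean(np.exp(beta @ t))
```

The two quantities agree up to Monte Carlo error, illustrating why a normal prior on β makes the assumption hold.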
The assumption in Proposition 6 is satisfied when, for instance: i) β and ν are a priori independent; ii) the distribution of ν is a gamma with any shape and scale parameters; iii) the distribution of β is normal. When the prior distribution of β is normal, ∫ (1 + e^{−x_i^T β/2}) π(β) dβ corresponds to a sum of moment generating functions of univariate normal distributions, which is finite. We now provide a characterization of the robustness of the proposed model by considering the same asymptotic framework as in Section 3.2. We provide in Proposition 7 a result analogous to Proposition 3. Proposition 7. For any i with l_i = 1, and c, ν and µ_i fixed, we have that lim_{ω→∞} f_µ_i,ν,c(y_i/µ_i) / (µ_i f_µ_i,ν,c(y_i)) = 1.
If ν/µ_i > c (the condition under which f_left exists), the same result holds for any i with s_i = 1. Consequently, the likelihood function Π_{i=1}^n f_µ_i,ν,c(y_i/µ_i)/µ_i, when evaluated at (β, ν) such that ν/µ_i > c for all i with s_i = 1, asymptotically behaves like (14) as ω → ∞, implying that, if the MLE belongs to a compact set such that ν/µ_i > c for all i with s_i = 1, then it corresponds asymptotically to the mode of (14), provided that the latter belongs to a compact set such that ν/µ_i > c for all i with s_i = 1 as well.
The difference with Proposition 3 is that the limiting PDF depends on β as well as ν. This implies that we cannot state that the proposed model is partially robust, even though we empirically observe (as in Figure 11) a certain degree of robustness. The asymptotic behaviour of f_µ_i,ν,c(y_i/µ_i)/µ_i also indicates that we cannot prove a result like Theorem 1 for the model proposed here (at least not using the proof technique employed for Theorem 1). This is because we cannot write f_µ_i,ν,c(y_i) as a product of two terms: one depending on (β, ν) but not on y_i, and the other depending on y_i but not on (β, ν).
To push the analysis of the robustness characterization further, we can analyse the behaviour of f_µ_i,ν,c(y_i) when ν/µ_i is large. This allows us to understand the asymptotic behaviour of f_µ_i,ν,c(y_i/µ_i)/µ_i as ω → ∞, that is, the limiting behaviour of the proposed PDF evaluated at an outlying data point, when ν/µ_i is large. Let us look at the case l_i = 1 (the case s_i = 1 can be analysed analogously). When ν/µ_i is large, f_µ_i,ν,c(y_i) behaves like (c e^{−c²/2}/√(2π)) (1/y_i) (log z_r)^{λ_r} / (log y_i)^{λ_r+1}, which depends on the parameters β and ν only through a quantity that is a (slowly) decreasing function of ν/µ_i. The fact that the variation is slow explains, in our opinion, empirical results such as those shown in Figure 11. Note that we can obtain an analogous result about the behaviour of f_ν,c(y_i) analysed in Section 3.2; the result is consistent with what is observed in Figure 5.
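For concreteness, the limiting value of λ_r implied by the proof of Proposition 5 (numerator c e^{−c²/2}/√(2π) = c φ(c), denominator 1 − Φ(c)) can be computed for c = 1.6; under that reading of the limit, this is the asymptote shown in Figure 9 (a sketch):

```python
from scipy.stats import norm

c = 1.6
# limiting value: lambda_r -> 1 + c * phi(c) / (1 - Phi(c)),
# with phi and Phi the standard normal PDF and CDF
lam_r_limit = 1.0 + c * norm.pdf(c) / (1.0 - norm.cdf(c))
```

For c = 1.6 this gives a limiting exponent of roughly 4.24.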

C.3 Proofs
In this section, we first present the proof of Proposition 5. Next, we present a lemma which is useful for the proof of Proposition 6, followed by the proof of that result. The proof of Proposition 7 is analogous to that of Proposition 3.
Proof of Proposition 5. We prove the result for λ_r; the result for λ_l is proved analogously. We have that λ_r = 1 + f_mid(z_r) z_r log(z_r) / P[Z_ϕ > z_r], where ϕ := ν/µ_i. We omit the dependence on i as it is not important. We study the limit of λ_r as ϕ → ∞, for fixed c. The proof is similar to that of Proposition 1.
To prove the result, we show that the numerator of the fraction converges towards c e^{−c²/2}/√(2π), and that the denominator converges towards 1 − Φ(c). We start with the analysis of the numerator, using that 1 + c/√ϕ → 1 and that (1 + c/√ϕ)^{√ϕ} → e^c. This concludes the proof that the numerator in λ_r converges towards c e^{−c²/2}/√(2π).
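The elementary limit used for the numerator can be checked numerically (a minimal sketch):

```python
import math

c = 1.6
errors = []
for phi in (1e2, 1e4, 1e8):
    approx = (1.0 + c / math.sqrt(phi)) ** math.sqrt(phi)
    errors.append(abs(approx - math.exp(c)) / math.exp(c))
# the relative error shrinks as phi grows,
# confirming (1 + c/sqrt(phi))^sqrt(phi) -> e^c
```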
We now turn to the proof that the denominator, P[Z_ϕ > z_r], converges towards 1 − Φ(c). We rewrite this probability in terms of X_ϕ,ϕ², where X_ϕ,ϕ² follows an inverse Gaussian distribution whose mean and shape parameters are ϕ and ϕ², respectively.
We have the following equality in distribution: X_ϕ,ϕ² is distributed as X_1 + · · · + X_⌊ϕ⌋ + X, where X_1, . . ., X_⌊ϕ⌋ are independent random variables, each having an inverse Gaussian distribution with a mean and shape of 1, and X follows an inverse Gaussian distribution whose mean and shape parameters are ϕ − ⌊ϕ⌋ and (ϕ − ⌊ϕ⌋)², respectively. By the central limit theorem, (X_1 + · · · + X_⌊ϕ⌋ − ⌊ϕ⌋)/√⌊ϕ⌋ converges in distribution towards a standard normal distribution. Also, X/√ϕ converges towards 0 with probability 1. Therefore, by Slutsky's theorem, (X_ϕ,ϕ² − ϕ)/√ϕ converges in distribution towards a standard normal distribution, which concludes the proof. □
Lemma 3. Viewed as a function of β and ν, (1/µ_i) f_µ_i,ν,c(y_i/µ_i) is bounded by B√ν(1 + e^{−x_i^T β/2}), where B > 0 is a constant (with respect to β and ν).
Proof. We bound the function in each of the three cases of (15). Firstly, we choose B appropriately, using that exp(−x) ≤ 1 for all x ≥ 0. Secondly, we use that y_i/µ_i > z_r. Finally, let us consider that z_l = 1 − (µ_i/ν)c, which is the situation where the third case in (15) can be activated. We use that (y_i/µ_i)^{−1} > z_r^{−1}, and B can be chosen appropriately given that (1/√z_l) exp(−(c²/2)(1/z_l)) is bounded for any 0 < z_l < 1. □
Proof of Proposition 6. Using Lemma 3, the quantity to integrate is bounded by a constant times ν^(n/2) Π_{i=1}^n (1 + e^{−x_i^T β/2}) π(β, ν), whose integral is finite by assumption. □
Figure 1. Estimates of ν, β_1 and β_2 as a function of y_n based on the method of Cantoni and Ronchetti (2001),

Figure 2. Comparisons between gamma PDFs and f_ν,c with c = 1.6, for different values of ν.

Figure 3. λ_r and λ_l as a function of ν when c = 1.6; the scale on the y-axis is logarithmic; the black horizontal line represents the asymptotic value as ν → ∞.
a situation where the posterior distribution is proper and thus well defined. We use π(• | ν) to denote the conditional prior density of β given ν and π(•) to denote the marginal prior density of ν. Proposition 2. Assume that π(β | ν) ≤ B, for any β and ν, B being a positive constant. Assume that

Figure 4. Pairs formed of data points of the dependent variable (cost of stay) and of an explanatory variable (log of length of stay) in the real-data example of Section 4.2; the black line represents an estimated exponential trend.
then both π(• | y) and π(• | y_k) are proper and, as ω → ∞ (with y_i = b_i ω or y_i = 1/(b_i ω) for outlying observations), (a) the asymptotic behaviour of the marginal distribution is:

Figure 6. Premiums versus protections in Scenarios 1 and 2, with lines premium = protection to identify the robust alternatives that offer better protections than their premiums.
Figure 7.

Figure 8. Bayesian Pearson residuals against Bayesian fitted values under the gamma GLM and the proposed model.

Figure 9. λ_r and λ_l as a function of ν/µ_i when c = 1.6; the scale on the y-axis is logarithmic; the black horizontal line represents the asymptotic value as ν/µ_i → ∞.

Figure 10. Comparisons between inverse Gaussian PDFs and f_µ_i,ν,c with c = 1.6, for different values of ν/µ_i.

Figure 11. Estimates of ν, β_1 and β_2 as a function of y_n based on the estimation of the inverse Gaussian GLM and the proposed method (with c = 1.6); the black horizontal lines represent the maximum likelihood estimates of the inverse Gaussian GLM based on the data set excluding the outlier.
With the proof of Proposition 5, we know that f_mid(z_r) z_r log(z_r) behaves like c e^{−c²/2}/√(2π) when ν/µ_i is large, and that λ_r − 1 behaves like c e^{−c²/2}/(√(2π)(1 − Φ(c))).

Table 1. Posterior medians and 95% HPD CIs under gamma GLM, based on the whole data set and without identified outliers, and under the proposed model.
GLM. Analysing such a data set is of interest to actuaries. It can help them understand the main factors contributing, in this case, to the health cost, in order to provide a basis for accurate