A robust Bayesian approach to modelling epistemic uncertainty in common-cause failure models

In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model.


Introduction
Common-cause failure has been recognized since the time of the Reactor Safety Study [6] as a dominant contributor to the unreliability of redundant systems. A number of models for common-cause failure have been developed since the publication of the Reactor Safety Study, with perhaps the most widely used one, at least in the U.S., being the basic parameter model [9].
The alpha-factor parametrisation of this model uses a multinomial distribution as its aleatory model for observed failures [9]. The conjugate prior to the multinomial model is the Dirichlet distribution. In the standard Bayesian approach, the analyst specifies the parameters of a precise Dirichlet distribution to model epistemic uncertainty in the alpha-factors, which are the parameters of the multinomial aleatory model. This Dirichlet prior is then updated with observed data to obtain a precise posterior distribution, which is also Dirichlet.
In this paper, we follow [11], and adapt the imprecise Dirichlet model of Walley [13] to represent epistemic uncertainty in the alpha-factors. In this approach, the analyst specifies lower or upper expectations (or both) for each alpha-factor, along with a learning parameter, which determines how quickly the prior distribution learns from observed data. We find that values in the range of 1 to 10 seem reasonable for this application.
Following [11], the approach is compared with that of Kelly and Atwood [8], which attempted to find a precise Dirichlet prior that was minimally informative [2], in the sense that it incorporated specified mean values for the alpha-factors, but was otherwise quite diffuse. The numerical example from [8] is addressed in the imprecise Dirichlet framework, which can be seen as an extension of the approach of [8] to the case where a precise mean for each alpha-factor cannot be specified.
Finally, we address the problem, not discussed in [11], of inference about actual failure rates. These failure rates are rational functions of the alpha-factors and the marginal failure rate per component. Modelling failures as a Poisson process, we take a Gamma distribution as conjugate prior for the marginal failure rate. Similarly to the procedure for the alpha-factors, we can model epistemic uncertainty on the marginal failure rate by considering lower and upper expected prior failure rates, along with a learning parameter that determines how quickly the prior distribution learns from observed data.
By combining our epistemic uncertainty models for both the alpha-factors and the marginal failure rate, we are able to perform a global sensitivity analysis on the common-cause failure rates. We provide an algorithm that calculates, up to reasonable precision, bounds on these failure rates. The resulting novel procedure is demonstrated on a simple electrical network reliability problem.
The paper is organized as follows. Section 2 reviews the basic parameter model and its reparametrisation as the alpha-factor model. Section 3 explores how the parameters of the alpha-factor model can be estimated, using Dirichlet and Gamma priors. Section 4 discusses the handling of epistemic uncertainty for the alpha-factors. Two ways to choose a Dirichlet prior (or sets of Dirichlet priors) starting from epistemic prior expectations of the alpha-factors are considered. Throughout, the main ideas are demonstrated on a numerical example. Section 5 shows how, similarly to the alpha-factor case, epistemic uncertainty can be expressed for the marginal failure rate. A set of conjugate Gamma priors is elicited by considering lower and upper expected prior marginal failure rates. Section 6 describes an algorithm that infers bounds on all common-cause failure rates based on our imprecise alpha-factor model and our imprecise marginal failure rate model. Section 7 demonstrates our methodology on a simple electrical network reliability problem. Section 8 ends the paper with some conclusions and thoughts for further research.

Common-Cause Failure Modelling
2.1. The Basic Parameter Model. Consider a system that consists of k components. Throughout, we make the following standard assumptions: (i) repair is immediate, and (ii) failures follow a Poisson process.
For simplicity, we assume that all k components are exchangeable, in the sense that they have identical failure rates. More precisely, we assume that all events involving exactly j components failing have the same failure rate, which we denote by q_j. This model is called the basic parameter model, and we write q for (q_1, ..., q_k).
For example, if we have three components, A, B, and C, then the rate at which we see only A failing is equal to the rate at which we see only B failing, and also to the rate at which we see only C failing; this failure rate is q_1. Moreover, the rate at which we observe only A and B jointly failing is equal to the rate at which we observe only B and C jointly failing, and also to the rate at which we observe only A and C jointly failing; this failure rate is q_2. The rate at which we see all three components jointly failing is q_3.

2.2. The Alpha-Factor Model. The alpha-factor parametrisation of the basic parameter model [9] starts out from the total failure rate of a component, q_t, which may involve failure of any number of components; that is, q_t is the rate obtained by looking at just one component, ignoring everything else. Clearly,

(2) q_t = ∑_{j=1}^{k} C(k−1, j−1) q_j,

where C(a, b) denotes the binomial coefficient. For example, again consider a three-component system, A, B, and C. The rate at which A fails is then the rate at which only A fails (q_1), plus the rate at which A and B, or A and C, fail (2 q_2), plus the rate at which all three components fail (q_3).

Next, the alpha-factor model introduces α_j, the so-called alpha-factor, which denotes the probability of exactly j of the k components failing, given that failure occurs; in terms of relative frequency, α_j is the fraction of failures that involve exactly j failed components. We write α for (α_1, ..., α_k). Clearly,

(3) α_j = C(k, j) q_j / ∑_{ℓ=1}^{k} C(k, ℓ) q_ℓ.

For example, again consider A, B, and C. The rate at which exactly one component fails is 3 q_1 (as we have three single components, each failing with rate q_1), the rate at which exactly two components fail is 3 q_2 (as we have three combinations of two components, each combination failing with rate q_2), and the rate at which all components fail is q_3. Translating these rates into fractions, we arrive precisely at Eq. (3).
It can be shown that [9, Table C-1, p. C-5]:

(4) q_j = (j / C(k−1, j−1)) (α_j / α_t) q_t, where α_t = ∑_{ℓ=1}^{k} ℓ α_ℓ.

Eqs. (2), (3), and (4) establish a one-to-one link between the so-called basic parameter model (q) and the alpha-factor model (q_t, α). The benefit of the alpha-factor model over the basic parameter model lies in its distinction between the total failure rate of a component, q_t, for which we generally have a lot of information, and the common-cause failures modelled by α, for which we generally have very little information.
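To make the one-to-one correspondence of Eqs. (2)–(4) concrete, the following sketch (not taken from the paper; the numerical rates are illustrative assumptions) converts a basic parameter vector q to (q_t, α) and back, for a three-component system.

```python
from math import comb, isclose

def q_to_alpha(q):
    """Convert the basic parameter model q = (q_1, ..., q_k) to (q_t, alpha)."""
    k = len(q)
    # Eq. (2): total failure rate of a single component.
    q_t = sum(comb(k - 1, j - 1) * q[j - 1] for j in range(1, k + 1))
    # Eq. (3): fraction of failure events involving exactly j components.
    d = sum(comb(k, l) * q[l - 1] for l in range(1, k + 1))
    alpha = [comb(k, j) * q[j - 1] / d for j in range(1, k + 1)]
    return q_t, alpha

def alpha_to_q(q_t, alpha):
    """Invert via Eq. (4): q_j = j / C(k-1, j-1) * alpha_j / alpha_t * q_t."""
    k = len(alpha)
    alpha_t = sum(l * alpha[l - 1] for l in range(1, k + 1))
    return [j / comb(k - 1, j - 1) * alpha[j - 1] / alpha_t * q_t
            for j in range(1, k + 1)]

q = [0.1, 0.02, 0.005]      # illustrative rates for k = 3
q_t, alpha = q_to_alpha(q)
assert isclose(q_t, 0.145)  # q_1 + 2 q_2 + q_3
assert all(isclose(a, b) for a, b in zip(alpha_to_q(q_t, alpha), q))
```

The round trip recovers q exactly, confirming that (q_t, α) carries the same information as q.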
One of the goals of this paper is to perform a sensitivity analysis, in the sense of robust Bayes [3,4,12], over α, and to measure its effects on q_j. Because the q_j are proportional to q_t, it turns out to take only very little additional effort to perform a sensitivity analysis over α and q_t jointly. So, although in many cases of practical interest we will know q_t quite well, interestingly, we do not need to assume that we know much at all about q_t.

Parameter Estimation

3.1. Dirichlet Prior for Alpha-Factors. Suppose that we have observed a sequence of N failure events, where we have counted the number of components involved in each failure event, say n_j of the N observed failure events involved exactly j failed components. We write n for (n_1, ..., n_k). In terms of the alpha-factors, the likelihood for n has a very simple form:

(5) Pr(n | α) ∝ ∏_{j=1}^{k} α_j^{n_j},

which is a multinomial distribution with parameter α.
As mentioned already, typically, for j ≥ 2, the n_j are very low, with zero being quite common for larger j. In such cases, standard techniques, such as maximum likelihood, for estimating the alpha-factors fail to produce sensible inferences. For any inference to be reasonably possible, it has been recognized [9] that we have to rely on epistemic information, that is, information which is not just described by the data.
A standard way to include epistemic information in the model is through specification of a Dirichlet prior for the alpha-factors [9]:

(6) f(α | s, t) ∝ ∏_{j=1}^{k} α_j^{s t_j − 1},

which is a conjugate prior for the multinomial likelihood specified in Eq. (5). In Eq. (6), we use Walley's [12, §7.7.3, p. 395] (s, t) notation for the hyperparameters. Here, s > 0 and t ∈ ∆, where ∆ is the (k − 1)-dimensional unit simplex:

(7) ∆ = {(t_1, ..., t_k) : t_j > 0, ∑_{j=1}^{k} t_j = 1}.

An interpretation of these parameters will be given shortly. First, let us calculate the posterior density for α:

(8) f(α | n, s, t) ∝ ∏_{j=1}^{k} α_j^{s t_j + n_j − 1}.
Of typical interest is, for instance, the posterior expectation of the probability α_j of observing j of the k components failing due to a common cause, given that failure occurs:

(9) E(α_j | n, s, t) = (s t_j + n_j) / (s + N),

where N = ∑_{j=1}^{k} n_j is the total number of observations. Eq. (9) provides the usual well-known interpretation of the hyperparameters s and t:

• If N = 0, then E(α_j | s, t) = t_j, so t_j is the prior expected chance of observing j of the k components failing due to a common cause, given that failure occurs.
• E(α_j | n, s, t) is a weighted average of t_j and n_j/N (the proportion of j-component failures in the N observations), with weights s and N, respectively. The parameter s thus determines how much data is required for the posterior to start moving away from the prior. If N ≪ s, the prior will weigh more; if N = s, prior and data weigh equally; and if N ≫ s, the data will weigh more. In particular, E(α_j | n, s, t) = t_j if N = 0 (as already mentioned), and E(α_j | n, s, t) → n_j/N as N → ∞.

For inference about q_j, which we will discuss in Section 6, we will also need, for natural numbers p_1, ..., p_k, with P := ∑_{j=1}^{k} p_j:

(10) E(∏_{j=1}^{k} α_j^{p_j} | n, s, t) = ∏_{j=1}^{k} (s t_j + n_j)_{p_j} / (s + N)_P,

where (x)_n, for n ∈ N_0, denotes the rising factorial, also known as Pochhammer's symbol [1, 6.1.22, p. 256]:

(11) (x)_n = x (x + 1) ⋯ (x + n − 1) = Γ(x + n)/Γ(x).

By linearity of expectation, Eq. (10) allows us to calculate the expectation of an arbitrary polynomial in α.
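As a numerical sanity check of Eqs. (9)–(11), the sketch below evaluates posterior moments of the Dirichlet via rising factorials; for p equal to a unit vector, Eq. (10) reduces to Eq. (9). The particular values of s, t, and n are illustrative.

```python
from math import isclose

def rising(x, n):
    """Pochhammer symbol (x)_n = x (x+1) ... (x+n-1), Eq. (11)."""
    out = 1.0
    for i in range(n):
        out *= x + i
    return out

def post_mean(j, n, s, t):
    """Eq. (9): posterior expectation of alpha_j."""
    N = sum(n)
    return (s * t[j] + n[j]) / (s + N)

def post_monomial(p, n, s, t):
    """Eq. (10): E(prod_j alpha_j^{p_j} | n, s, t)."""
    N, P = sum(n), sum(p)
    num = 1.0
    for j in range(len(n)):
        num *= rising(s * t[j] + n[j], p[j])
    return num / rising(s + N, P)

s, t = 2.0, [0.95, 0.03, 0.015, 0.005]  # illustrative hyperparameters
n = [35, 1, 0, 0]                        # illustrative data
# Eq. (10) with p equal to the j-th unit vector reduces to Eq. (9):
for j in range(4):
    p = [0] * 4
    p[j] = 1
    assert isclose(post_monomial(p, n, s, t), post_mean(j, n, s, t))
```

By linearity, sums of such monomial expectations give the posterior expectation of any polynomial in α, which is what the Taylor-based inference of Section 6 requires.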

3.2. Per-Component Failure Rate. Now we turn to the estimation of q_t, the total failure rate per component. As mentioned at the start of Section 2, we assume that failures follow a Poisson process. Suppose we observe M failures of our component over a time interval of length T. If M is sufficiently large, then a reasonable point estimate for q_t would be M/T.
Often, that will be enough. However, in case M is not terribly large, we can easily propose a conjugate prior for q_t. Specifically, the likelihood for M, given T, is:

(12) Pr(M | q_t, T) = (q_t T)^M e^{−q_t T} / M!,

which is simply a Poisson distribution with parameter q_t T.
A standard way to include epistemic information in the model is through specification of a Gamma prior [5,10]:

(13) f(q_t | u, v) ∝ q_t^{uv − 1} e^{−u q_t},

which is a conjugate prior for the Poisson likelihood specified in Eq. (12). The posterior density for q_t is:

(14) f(q_t | M, T, u, v) ∝ q_t^{uv + M − 1} e^{−(u + T) q_t}.

Of typical interest is the posterior expectation of q_t:

(15) E(q_t | M, T, u, v) = (uv + M) / (u + T).

Eq. (15) provides a straightforward interpretation of the hyperparameters u and v, which mimics our discussion concerning the Dirichlet prior:

• If T = 0, then E(q_t | u, v) = v, so v is the prior expected failure rate.
• E(q_t | M, T, u, v) is a weighted average of v and M/T (the empirically observed failure rate), with weights u and T, respectively. The parameter u thus determines how long we need to observe the process before the posterior starts to move away from the prior. If T ≪ u, the prior will weigh more; if T = u, prior and data weigh equally; and if T ≫ u, the data will weigh more. In particular, E(q_t | M, T, u, v) = v if T = 0 (as already mentioned), and E(q_t | M, T, u, v) → M/T as T → ∞.
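The conjugate Gamma update of Eq. (15) can be sketched in a few lines; the numerical values of u, v, M, and T below are illustrative assumptions, not taken from the paper.

```python
from math import isclose

def gamma_post_mean(M, T, u, v):
    """Eq. (15): posterior expectation of q_t after observing M failures
    over time T, starting from prior mean v and learning parameter u."""
    return (u * v + M) / (u + T)

u, v = 5.0, 0.2  # illustrative: prior mean 0.2 failures per unit time
# No data: the posterior mean is the prior mean v.
assert isclose(gamma_post_mean(0, 0.0, u, v), v)
# With T = u, prior and data weigh equally:
assert isclose(gamma_post_mean(3, 5.0, u, v), 0.5 * v + 0.5 * (3 / 5.0))
# As T grows, the data dominate and the mean approaches M/T:
assert abs(gamma_post_mean(300, 1000.0, u, v) - 0.3) < 1e-2
```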

Handling Epistemic Uncertainty in Alpha-Factors
Crucial to reliable inference in the alpha-factor model is proper modelling of epistemic uncertainty about failures, which in the above approach is expressed through the (s, t) parameters. We focus on two methods for elicitation of these parameters, and the inferences that result from them.
Throughout, we will use the following example, which is taken from Kelly and Atwood [8]. Consider a system with four redundant components (k = 4). The probability of j out of k failures, given that failure has happened, was denoted by α_j. We assume that the analyst's prior expectation µ_spec,j for each α_j is:

(16) µ_spec,2 = 0.030, µ_spec,3 = 0.015, µ_spec,4 = 0.005,

whence, by normalisation, µ_spec,1 = 0.950. We have 36 observations, in which 35 showed one component failing, and 1 showed two components failing: n = (35, 1, 0, 0).

4.1. Constrained Non-Informative Prior. Atwood [2] studied priors for the binomial model which maximise entropy (and hence are 'non-informative') whilst constraining the mean to a specific value. Although these priors are not conjugate, Atwood [2] showed that they can be well approximated by Beta distributions, which are conjugate. Kelly and Atwood [8] applied this approach to the multinomial model with conjugate Dirichlet priors, by choosing a constrained non-informative prior for the marginals of the Dirichlet, which are Beta. This leads to an over-specified system of equalities, which can be solved via least-squares optimisation.
For the problem we are interested in, µ_spec,1 is close to 1. In this case, the solution of the least-squares problem turns out to be close to

(17) t_j = µ_spec,j for all j,

with s determined by the least-squares fit. The degree of variation in the posterior under different priors is evidently somewhat alarming. In the next section, we aim to robustify the model by using sets of priors from the start. In the set of priors of Eq. (18), the analyst has to specify the bounds [t̲_j, t̄_j] for each j ∈ {1, ..., k}, along with [s̲, s̄].
The posterior lower and upper expectations of α_j are:

(19) E̲(α_j | n, H) = (s̄ t̲_j + n_j)/(s̄ + N) if n_j/N ≥ t̲_j, and (s̲ t̲_j + n_j)/(s̲ + N) otherwise;

(20) Ē(α_j | n, H) = (s̄ t̄_j + n_j)/(s̄ + N) if n_j/N ≤ t̄_j, and (s̲ t̄_j + n_j)/(s̲ + N) otherwise.

For the model to be of any use, we must be able to elicit the bounds. The interval [t̲_j, t̄_j] simply represents bounds on the prior expectation of the chance α_j.

Fixed Learning Parameter. Typically, the learning parameter s is taken to be 2 (not without controversy; see the insightful discussions in [13]). One might therefore be tempted to use the same prior expectations t_j for the α_j as above (Eq. (16)), with s = 2, resulting in the posterior expectations E(α_j | n, s, t) = (2 t_j + n_j)/38, approximately 0.971, 0.028, 0.0008, and 0.0003 for j = 1, ..., 4. Whence, for this example, it is obvious that s = 2 is an excessively poor choice: the posterior expectations in case of zero counts are pulled far too much towards zero. One might suspect that this is partly due to the strong prior information, that is, the knowledge of t_j. However, even if we interpret the given probabilities as bounds on the prior expectations (Eq. (21)), we still find that only the posterior inferences about α_1 (and perhaps also α_2) seem reasonable. We conclude that the imprecise Dirichlet model with s = 2 learns too fast from the data in case of zero counts.
On the one hand, when counts are sufficiently far from zero, the posterior probability with s = 2, and perhaps even s = 1 or s = 0, seems appropriate. For zero counts, however, a larger value of s seems mandatory. Therefore, it seems logical to pick an interval for s.
A further argument for choosing an interval for s, in case of an informative set of priors, is provided by Walley [12, §5.4.4, p. 225]: a larger value of s̄ ensures that the posterior does not move away too fast from the prior, which is particularly important for zero counts, and the difference between s̲ and s̄ effectively results in greater posterior imprecision if n_j/N ∉ [t̲_j, t̄_j]. To see this, note that, if t̲_j ≤ n_j/N ≤ t̄_j, it follows from Eqs. (19) and (20) that both the lower and the upper posterior expectation are calculated using s̄. When n_j/N ≤ t̲_j (or t̄_j ≤ n_j/N), the lower (upper) posterior expectation is calculated using s̲ instead, which is nearer to n_j/N due to the lower weight s̲ for the prior bound t̲_j (t̄_j). The increased imprecision reflects the conflict between the prior assignment [t̲_j, t̄_j] and the observed fraction n_j/N; this is referred to as prior-data conflict (see also [14]).

Interval for Learning Parameter. We follow Good [7, p. 19] (as suggested by Walley [12, Note 5.4.1, p. 524]), and reason about posterior expectations of hypothetical data to elicit s̲ and s̄; see also [12, §5.3.3, p. 219] for further discussion on the elicitation of s. Our approach is similar, but simpler for the case under study. We assume that t̄_1 = 1 and t̲_j = 0 for all j ≥ 2.

The upper probability of multiple (j ≥ 2) failed components in trial m + 1, given one (j = 1) failed component in each of the first m trials, is

s̄ t̄_j / (m + s̄).

(Note: there is no prior-data conflict in this case.) Whence, for the above probability to reduce to t̄_j/2 (i.e., to halve the prior upper probability), we need m = s̄. In other words, s̄ is the number of one-component failures required to halve the upper probabilities of multi-component failure.
Conversely, the lower probability of one (j = 1) failed component in trial m + 1, given only multiple (j ≥ 2) failed components in the first m trials, is

s̲ t̲_1 / (m + s̲).

(Note: there is strong prior-data conflict in this case.) In other words, s̲ is the number of multi-component failures required to halve the lower probability of one-component failure. Note that, in this case, a few alternative interpretations present themselves. First, for j ≥ 2,

(m + s̲ t̄_j) / (m + s̲),

so s̲ is also the number of j-component failures required to increase the upper probability of j components failing to (1 + t̄_j)/2 (generally, this will be close to 1/2, provided that t̄_j is close to zero). Secondly, for j ≥ 2,

E̲(α_j | n_j = m, N = m, H) = m / (m + s̄),

so s̄ is also the number of multi-component failures required to increase the lower probability of multi-component failures to one half.
Each of these counts seems well suited for elicitation, and all are easy to interpret. As a guideline, we suggest the following easily remembered rules:

• s̄ is the number of one-component failures required to halve the upper probabilities of multi-component failures, and
• s̲ is the number of multi-component failures required to halve the lower probability of one-component failures.

Taking the above interpretation, the difference between s̲ and s̄ reflects the fact that the rate at which we reduce upper probabilities is less than the rate at which we reduce lower probabilities, and thus reflects a level of caution in our model. Coming back to our example, reasonable values are s̲ = 1 (if we immediately observe multi-component failures, we might be quite keen to reduce our lower probability for one-component failure) and s̄ = 10 (we are happy to halve our upper probabilities of multi-component failures after observing 10 one-component failures). With these values, when taking for t_j the values given in Eq. (16), we find posterior lower and upper expectations of α_j of approximately [0.967, 0.972] for j = 1, [0.0278, 0.0283] for j = 2, [0.0004, 0.0033] for j = 3, and [0.0001, 0.0011] for j = 4. These bounds indeed reflect caution in inferences where zero counts have occurred (j = 3 and j = 4), with upper expectations considerably larger than in the model with fixed s, while still giving a reasonable expectation interval for the probability of one-component failure.
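Assuming precise prior means t_j (Eq. (16)) and s ranging over [1, 10], the posterior bounds above can be reproduced by evaluating Eq. (9) at the endpoints of the s-interval, since Eq. (9) is monotone in s for fixed t_j; a minimal sketch:

```python
def post_mean(j, n, s, t):
    """Eq. (9): posterior expectation of alpha_j for a single prior."""
    N = sum(n)
    return (s * t[j] + n[j]) / (s + N)

def post_bounds(j, n, s_lo, s_hi, t):
    """Lower/upper posterior expectation of alpha_j over s in [s_lo, s_hi].
    For fixed t_j, Eq. (9) is monotone in s, so the extremes sit at the
    endpoints of the interval."""
    vals = [post_mean(j, n, s, t) for s in (s_lo, s_hi)]
    return min(vals), max(vals)

t = [0.95, 0.03, 0.015, 0.005]  # prior means, Eq. (16)
n = [35, 1, 0, 0]                # observed failure counts
for j in range(4):
    lo, hi = post_bounds(j, n, 1.0, 10.0, t)
    print(f"alpha_{j + 1}: [{lo:.4f}, {hi:.4f}]")
# For j = 3, 4 (zero counts), the intervals remain cautiously wide,
# e.g. alpha_3 lies in roughly [0.0004, 0.0033].
```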
If we desire to specify our initial bounds for t_j more conservatively, as in Eq. (21), we find similar results:

Handling Epistemic Uncertainty in Marginal Failure Rate
Before we can consider inferences on the common-cause failure rates q_j, we briefly explain how we express epistemic uncertainty on the marginal failure rate q_t. As seen in Section 3.2, we use conjugate Gamma priors with hyperparameters u and v, where v is the prior expected failure rate, and u determines the learning speed. Similarly to the alpha-factor case, we can express vague prior information on q_t by considering sets of priors, generated by sets of hyperparameters; i.e., we specify a parameter set J ⊆ (0, ∞) × (0, ∞). Unlike Section 4.2.1, here J = {u} × (0, ∞), for some fixed value of u, does not lead to a practically useful near-ignorant set of priors, as then Ē(q_t | M, T, J) = ∞ for any M and T. In practice, it should not be a big issue to find bounds [v̲, v̄] for the prior expected marginal failure rate.
Similarly to Eqs. (19) and (20), the posterior lower and upper expectations of q_t are

(25) E̲(q_t | M, T, J) = (ū v̲ + M)/(ū + T) if M/T ≥ v̲, and (u̲ v̲ + M)/(u̲ + T) otherwise;

(26) Ē(q_t | M, T, J) = (ū v̄ + M)/(ū + T) if M/T ≤ v̄, and (u̲ v̄ + M)/(u̲ + T) otherwise.

To elicit bounds for the learning parameter u, considerations similar to those in Section 4.2.2 can be made. Assuming v̲ = 0, the posterior lower expectation of q_t is

(27) E̲(q_t | M, T, J) = M/(T + ū).

(Note: there is no prior-data conflict in this case.) Whence, ū is the amount of time we need to observe the process until we raise the lower expectation of q_t from 0 to half of the observed failure rate M/T.
Conversely, assuming v̲ > 0, and no failures at all during time T, the posterior lower expectation of q_t is

(28) E̲(q_t | M = 0, T, J) = u̲ v̲/(T + u̲) = v̲/(T/u̲ + 1).

(Note: prior-data conflict is present in this case.) Whence, u̲ is the time we need to observe the process, without any failures, until v̲ is reduced by half.
Contrary to the situation in Section 4, zero counts are much less of a concern when estimating the marginal failure rate. For the sake of simplicity, it may therefore suffice to consider parameter sets of the form

(29) J = {u} × [v̲, v̄]

only. Both Eqs. (27) and (28) can then serve to determine u̲ = ū = u.
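For a parameter set of the form of Eq. (29), with a single learning parameter u and prior mean bounds [v̲, v̄], the posterior bounds on q_t follow directly from Eq. (15), which is increasing in v. The numbers below are illustrative assumptions.

```python
def qt_bounds(M, T, u, v_lo, v_hi):
    """Posterior lower/upper expectation of q_t over J = {u} x [v_lo, v_hi].
    Eq. (15) is increasing in v, so the extremes sit at v_lo and v_hi."""
    return (u * v_lo + M) / (u + T), (u * v_hi + M) / (u + T)

# Illustrative: prior mean between 0.1 and 0.5 failures per year,
# u = 2 years, and M = 4 failures observed over T = 8 years.
lo, hi = qt_bounds(4, 8.0, 2.0, 0.1, 0.5)
assert abs(lo - 0.42) < 1e-9 and abs(hi - 0.50) < 1e-9
```

Note how the interval has already tightened around the empirical rate M/T = 0.5: with no zero-count problem, even a modest observation window largely overrides the prior bounds.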
A numerical example will be given in Section 7.
Table 1. Accuracy of first and second order Taylor approximations.
Inference on Failure Rates

6.1. Expected Failure Rates. For inference on the failure rates q_j, we now combine our models for the alpha-factors and the marginal failure rate by using Eq. (4). The problem in doing this is that there is, as far as we know, no immediate closed-form expression for the posterior expectation of q_j, because Eq. (4) is a rational function of α. However, we can approximate it using a Taylor expansion. Specifically, since ∑_{ℓ=1}^{k} α_ℓ = 1, the denominator of Eq. (4) satisfies α_t = ∑_{ℓ=1}^{k} ℓ α_ℓ = 1 + ∑_{ℓ=2}^{k} (ℓ − 1) α_ℓ, and, as long as ∑_{ℓ=2}^{k} (ℓ − 1) α_ℓ < 1 (this is always true if k ≤ 2; for larger k, it is usually true because α_ℓ is usually very small for ℓ ≥ 3), we can use the Taylor expansion 1/(1 + x) = 1 − x + x² − x³ + ⋯ (valid for |x| < 1) to arrive at:

(33) q_j ≈ (j / C(k−1, j−1)) α_j q_t ∑_{i=0}^{p} (−1)^i (∑_{ℓ=2}^{k} (ℓ − 1) α_ℓ)^i.

The posterior expectation of Eq. (33) can now be evaluated, using Eqs. (10) and (15), under the usual assumption that q_t is independent of the alpha-factors.
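A small sketch of the truncated expansion of 1/(1 + x) used here; it also illustrates that even truncation orders overestimate, and odd orders underestimate, 1/(1 + x) for any x ≥ 0, which is what makes the approximation usable for bounding.

```python
def taylor_inv1p(x, p):
    """Truncated Taylor series of 1/(1+x): sum_{i=0}^{p} (-x)^i."""
    return sum((-x) ** i for i in range(p + 1))

for x in (0.1, 0.3, 0.5, 2.0):
    exact = 1.0 / (1.0 + x)
    # Even order p = 2 overestimates, odd order p = 1 underestimates,
    # even where the series itself diverges (x = 2.0):
    assert taylor_inv1p(x, 2) >= exact >= taylor_inv1p(x, 1)

# Second order stays fairly accurate for x < 0.5:
assert abs(taylor_inv1p(0.3, 2) - 1 / 1.3) < 0.03
```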
To get a better idea of the accuracy, Table 1 tabulates first and second order approximations. For example, the second order approximation remains fairly accurate for ∑_{ℓ=2}^{k} (ℓ − 1) α_ℓ < 0.5, and the first order approximation for ∑_{ℓ=2}^{k} (ℓ − 1) α_ℓ < 0.3. An obvious issue with Taylor approximation is that the domain of integration includes values of α for which the Taylor series does not converge. However, it is easy to see that, for any x ≥ 0 (not just those x for which |x| < 1):

(34) ∑_{i=0}^{p} (−x)^i ≥ 1/(1 + x) for even p, and

(35) ∑_{i=0}^{p} (−x)^i ≤ 1/(1 + x) for odd p.

Then, using the 4th order Taylor approximation of g_2(α) as explained in Section 6.1, we obtain bounds for q_2. A similar analysis for q_1 yields:

(61) 0.318 = (0.595 − 0.003) × 0.538 ≤ E̲(q_1 | n, M, T, H, J) ≤ Ē(q_1 | n, M, T, H, J) ≤ (0.643 + 0.003) × 0.577 = 0.373,

or in other words, single failures occur at an expected rate between 0.318 and 0.373 per year.
In this simple example with two redundant components, the posterior imprecision for the single failure rate is similar to the posterior imprecision for the double failure rate. This is essentially a special feature of the two-component case, because it must hold that α_1 + α_2 = 1 when k = 2. For larger k, the differences in posterior imprecision between common-cause failure rates will be considerably larger, as in the numerical examples of Section 4.2, where, for instance, in the case of Eq. (24), Ē(α_j | n, H) − E̲(α_j | n, H) ranges from 0.001 to 0.011.

Conclusion
We studied elicitation of hyperparameters for inferences that arise in the alpha-factor representation of the basic parameter model. For the hyperparameters of the Dirichlet prior for the alpha-factors, we argued that bounds, rather than precise values, are desirable, because inferences are strongly sensitive to the choice of prior distribution, particularly when faced with zero counts. We concluded that assigning an interval for the learning parameter is especially important. In doing so, we effectively adapted the imprecise Dirichlet model [13] to represent epistemic uncertainty in the alpha-factors.
For the marginal failure rate, the second part of the model, we proposed a set of Gamma priors with properties similar to those of the set of Dirichlet priors used for the alpha-factors. As zero counts are generally not an issue for this part of the model, it may suffice to consider a fixed learning parameter here.
We identified simple ways to elicit information about the hyperparameters, by reasoning on hypothetical data, rather than by the maximum entropy arguments used in earlier studies [2,8] on the estimation of alpha-factors. Essentially, the analyst needs to specify how quickly he is willing to learn from various sorts of hypothetical data.
Taking everything together, we arrived at a powerful procedure for analysing the influence of epistemic uncertainty on all common-cause failure rates, the central quantities of interest in the basic parameter model. As there is no immediate closed-form solution for the expectation of these failure rates, we presented an approximation based on Taylor expansion, and quantified the error of the approximation at any order.
By allowing the analyst to specify bounds for all hyperparameters, along with clear interpretations of these bounds, we effectively provided an operational method for full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the model. The procedure was illustrated by means of a simple electrical network example, demonstrating its feasibility and usefulness.
In the paper, we chose the sets of hyperparameters to be of a very specific convex form (Eqs. (18) and (29)). This led to simple calculations (at least for this problem), and made elicitation fairly straightforward. Nevertheless, other shapes could still provide a better fit to any given epistemic information, and perhaps also have better updating properties. Such shapes may, however, be more difficult to elicit. More general shapes for sets of Beta priors are discussed in [15]. Already for Beta priors, elicitation of these shapes is non-trivial, and provides an interesting challenge. We leave a thorough study of such issues, for Dirichlet and Gamma priors, to future work.
Another aspect we neglected in this paper is the calculation of (imprecise) credible intervals.We expect that some clever approximation procedure may be needed.

For our example, this means that s = 10 [8, p. 400, §3]. An obvious calculation reveals the resulting posterior expectations under this prior [8, p. 401, §3.1]. Kelly and Atwood [8, §4] compare these results against a large number of other choices of priors, and note that the posterior resulting from Eq. (17) seems too strongly influenced by the prior, particularly in the presence of zero counts. For instance, the uniform prior is a Dirichlet distribution with hyperparameters t_j = 0.25 and s = 4, which gives E(α_j | n, s, t) = (1 + n_j)/40, that is, 0.90, 0.05, 0.025, and 0.025 for j = 1, ..., 4.

4.2. Imprecise Dirichlet Model.

4.2.1. Near-Ignorance Model. In case no prior information is available, Walley proposes as a so-called near-ignorance prior a set of Dirichlet priors, with hyperparameters constrained to the set

H = {(s, t) : t ∈ ∆}

for some fixed value of s, which determines the learning speed of the model [12, §5.3.2, p. 218] [13, §2.3, p. 9].

4.2.2. General Model. When prior information is available, more generally, we may assume that we can specify a subset H of (0, +∞) × ∆. Following Walley's suggestions [12, §5.4.3, p. 224] [13, §6, p. 32], we take

(18) H = {(s, t) : s ∈ [s̲, s̄], t ∈ ∆, t_j ∈ [t̲_j, t̄_j]}.