On the Estimation of Reliability of Weighted Weibull Distribution: A Comparative Study

This paper describes estimation of the reliability function of the weighted Weibull distribution. The maximum likelihood estimators of the unknown parameters are obtained. Nonparametric methods, namely the empirical method, the kernel density estimator and a modified shrinkage estimator, are also provided. The Markov chain Monte Carlo method is used to compute the Bayes estimators under gamma and Jeffreys priors. The performance of the maximum likelihood, nonparametric and Bayesian estimators is assessed through a real data set.


Introduction
The Weibull distribution has been used very extensively as a model in reliability and survival analysis. The distribution provides much wider applications as compared with those provided by the exponential distribution. The Weibull distribution can also be used as an alternative to other distributions used in reliability engineering and life testing, such as the gamma and lognormal distributions. Following the method of Azzalini (1985), Gupta and Kundu (2009) proposed a weighted exponential distribution which can also be used as an alternative to the gamma and Weibull distributions. Using the same idea, Shahbaz et al. (2010) proposed the weighted Weibull distribution.
Suppose two random variables X_1 and X_2 are independent and identically distributed Weibull random variables with distribution function F(x) = 1 − e^{−λx^β} and density function f(x) = λβx^{β−1}e^{−λx^β}, where β is the shape parameter and λ is the scale parameter. The density function of the weighted Weibull distribution is given by

f(x) = ((α + 1)/α) λβ x^{β−1} e^{−λx^β} (1 − e^{−αλx^β}), x > 0, α, β, λ > 0. (1)

The reliability, or survival, function associated with (1) is

F̄(x) = ((α + 1)/α) e^{−λx^β} − (1/α) e^{−(α+1)λx^β}. (2)

It should be mentioned that for β = 1 the distribution f(x) given by (1) reduces to the weighted exponential distribution of Gupta and Kundu (2009). The hazard rate is decreasing when β < 1 and increasing when β > 1.
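As a quick sanity check on these expressions, the density and reliability function can be coded directly. The following is a minimal Python sketch (the function names are ours, not from the paper), assuming the Shahbaz et al. (2010) parameterization above:

```python
import numpy as np

# Weighted Weibull density, eq. (1); illustrative names, not from the paper.
def ww_pdf(x, alpha, beta, lam):
    u = lam * x ** beta
    return ((alpha + 1.0) / alpha) * lam * beta * x ** (beta - 1.0) \
        * np.exp(-u) * (1.0 - np.exp(-alpha * u))

# Reliability (survival) function obtained by integrating the density.
def ww_survival(x, alpha, beta, lam):
    u = lam * x ** beta
    return ((alpha + 1.0) / alpha) * np.exp(-u) \
        - (1.0 / alpha) * np.exp(-(alpha + 1.0) * u)
```

A useful internal check is that the reliability equals 1 as x approaches 0 and that its negative derivative recovers the density.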
The model can be considered as another useful generalization of the Weibull distribution. This lifetime distribution can model various shapes of failure rates and hence various aging criteria. Shahbaz et al. (2010) discussed some of its properties, such as the reliability function, hazard function and moment generating function. They also discussed estimation of the unknown parameters of the weighted Weibull distribution, but did not compare different methods of estimation. The main goal of this paper is to estimate the parameters by maximum likelihood and Bayesian methods and then use the estimated parameters to estimate the reliability function. For the same purpose, some nonparametric methods, namely the empirical method, the kernel density estimator and a modified shrinkage estimator, are used.
The rest of the paper is organized as follows. In Section 2, the maximum likelihood estimators (MLEs) and the nonparametric estimators are obtained. In Section 3, we obtain Bayes estimators under gamma and Jeffreys priors; the MCMC method is used to accomplish the required computations, which are not available in closed form. In Section 4, comparisons are made between the Bayesian, maximum likelihood and nonparametric estimators through a real data set.

Methods of Estimation
In this section, we consider the estimation problem for the weighted Weibull distribution. We discuss the maximum likelihood estimators, the empirical method, the kernel density estimator and a modified shrinkage estimator.

Maximum Likelihood Estimator
Suppose that X is a random variable distributed according to the weighted Weibull distribution, X ∼ WW(α, β, λ). Suppose further that x_1 < x_2 < ... < x_n denote the observed failure times of the experimental units and θ = (α, β, λ). The likelihood function is given by

L(θ) = ((α + 1)/α)^n (λβ)^n (∏_{i=1}^n x_i^{β−1}) exp(−λ ∑_{i=1}^n x_i^β) ∏_{i=1}^n (1 − e^{−αλx_i^β}). (3)

The log-likelihood function of the WW(α, β, λ) is given by

ℓ(θ) = n log((α + 1)/α) + n log(λβ) + (β − 1) ∑_{i=1}^n log x_i − λ ∑_{i=1}^n x_i^β + ∑_{i=1}^n log(1 − e^{−αλx_i^β}). (4)

The MLEs of (α, β, λ), say (α̂_ML, β̂_ML, λ̂_ML), are obtained as the solution of the Fisher score equations: setting the partial derivatives of (4) with respect to α, β and λ to zero and solving leads to the MLEs.
These equations are transcendental, so an algebraic solution is not available and we use numerical methods to compute the MLEs; we have used the nlm() function in R. The corresponding ML plug-in estimator of the reliability function is obtained by substituting the MLEs into the reliability function:

F̄_ML(x) = F̄(x; α̂_ML, β̂_ML, λ̂_ML). (5)
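The paper computes the MLEs with R's nlm(); as an illustrative alternative, the following Python sketch minimizes the negative log-likelihood with a derivative-free Nelder-Mead search (function names and starting values are our own choices, not the authors'):

```python
import numpy as np
from scipy.optimize import minimize

# Negative log-likelihood of a weighted Weibull sample (illustrative sketch).
def ww_negloglik(theta, x):
    alpha, beta, lam = theta
    if alpha <= 0 or beta <= 0 or lam <= 0:
        return np.inf  # keep the search inside the parameter space
    n = len(x)
    u = lam * x ** beta
    loglik = (n * np.log((alpha + 1.0) / alpha) + n * np.log(lam * beta)
              + (beta - 1.0) * np.sum(np.log(x))
              - np.sum(u)
              + np.sum(np.log1p(-np.exp(-alpha * u))))
    return -loglik

# Derivative-free minimization, since the score equations are transcendental.
def ww_mle(x, start=(1.0, 1.0, 1.0)):
    res = minimize(ww_negloglik, start, args=(np.asarray(x),),
                   method="Nelder-Mead")
    return res.x
```

A convenient way to test such code is to simulate weighted Weibull data: if U is the sum of independent Exp(1) and Exp(1 + α) variates, then X = (U/λ)^{1/β} follows WW(α, β, λ).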

Kernel Density Estimator
Here we attempt to estimate the density directly from the data without assuming a particular form for the underlying distribution. Let X_1, X_2, ..., X_n denote a sample of size n from a random variable with density f. The kernel density estimate of f at the point x is given by

f̂_h(x) = (1/(nh)) ∑_{i=1}^n K((x − X_i)/h), (6)

where the kernel K satisfies ∫ K(x)dx = 1 and the smoothing parameter h is known as the bandwidth. In practice, the kernel K is generally chosen to be a unimodal probability density symmetric about zero. In this case, any function K satisfying ∫ K(y)dy = 1, ∫ yK(y)dy = 0 and ∫ y²K(y)dy < ∞ can be used as a kernel. The Gaussian kernel is a popular choice for K. The selection of the smoothing parameter, or bandwidth h, is crucial because it affects the shape of the corresponding estimator. If the bandwidth is small, we obtain an undersmoothed estimator with high variability; on the other hand, if the value of h is large, the resulting estimator is oversmoothed. We use the rule-of-thumb optimal bandwidth (see e.g. Marron and Chung 2001)

h = 0.9 min(σ̂, Q/1.34) n^{−1/5}, (7)

where σ̂ is the sample standard deviation and Q is the interquartile range of X.
When the data are complete (uncensored), a kernel estimate (KE) of the survival function associated with (6) is given by

F̄_h(x) = 1 − (1/n) ∑_{i=1}^n K̃((x − X_i)/h), where K̃(t) = ∫_{−∞}^t K(y)dy. (8)
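A minimal Python sketch of the Gaussian kernel estimators, using as an assumption the common rule-of-thumb bandwidth 0.9 min(σ̂, IQR/1.34) n^{−1/5} (the names below are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import norm

# Rule-of-thumb bandwidth (an assumption; the paper cites Marron and Chung 2001).
def bandwidth(data):
    n = len(data)
    sigma = np.std(data, ddof=1)
    iqr = np.percentile(data, 75) - np.percentile(data, 25)
    return 0.9 * min(sigma, iqr / 1.34) * n ** (-0.2)

# Gaussian kernel density estimate at the points in x.
def kde(x, data, h=None):
    h = bandwidth(data) if h is None else h
    z = (np.asarray(x)[:, None] - np.asarray(data)[None, :]) / h
    return norm.pdf(z).mean(axis=1) / h

# Kernel survival estimate: average of 1 - Phi((x - Xi)/h) over the sample.
def kernel_survival(x, data, h=None):
    h = bandwidth(data) if h is None else h
    z = (np.asarray(x)[:, None] - np.asarray(data)[None, :]) / h
    return norm.sf(z).mean(axis=1)
```

For the Gaussian kernel the integrated kernel is simply the standard normal CDF, which is why `norm.sf` (the survival function of Φ) appears in the survival estimator.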

Empirical Estimator
We now consider a nonparametric estimation method based on the empirical distribution function (EDF). The EDF is a step function with jumps at the order statistics X_(1), X_(2), ..., X_(n), defined as

F_n(x) = (1/n) ∑_{i=1}^n I(X_i ≤ x), (9)

where I is an indicator function. By the Glivenko-Cantelli theorem, the EDF F_n converges uniformly to F almost surely as n → ∞. The reliability function is estimated as the proportion of observations surviving longer than x, i.e.

F̄_n(x) = 1 − F_n(x) = (1/n) ∑_{i=1}^n I(X_i > x). (10)
Note that the most common way to estimate the reliability function nonparametrically is the Kaplan-Meier (K-M) method. However, if there is no censoring, as in the present case, the K-M estimate coincides with the empirical survival function.
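The empirical reliability estimator amounts to a one-line computation; a Python sketch (illustrative names only):

```python
import numpy as np

# Empirical reliability: proportion of observations exceeding each point in x.
def empirical_survival(x, data):
    x = np.asarray(x)
    data = np.asarray(data)
    return (data[None, :] > x[:, None]).mean(axis=1)
```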

Modified Shrinkage Estimator
Jani (1991) suggested a class of shrinkage estimators for the scale parameter θ of the exponential distribution. The estimator, say T_p, has the form

T_p = W x̄^p + (1 − W) θ_0^p, (11)

where x̄ is the sample mean, θ_0 is a prior guess of θ, p is a non-zero real number and W is a constant chosen so that the mean square error of T_p is minimized. The constant W can be estimated by its sample counterpart Ŵ. This shrinkage technique can be adopted to construct a shrinkage estimator for the reliability function:
F̃(x) = Ŵ F̄_n(x) + (1 − Ŵ) F̄_h(x), (12)

where F̄_h(x) is defined as given in relation (8) and is taken as the initial value for the reliability function, and F̄_n(x) is the empirical reliability.
It is obvious that for different values of p one can obtain many more shrinkage estimators. Also, it should be mentioned here that the class of shrinkage estimators given by (12) is not unique.
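However the weight Ŵ is obtained, the resulting reliability estimator is a convex combination of the empirical and initial estimates. A Python sketch (the weight is supplied by the caller here, since the closed form of Ŵ from Jani's derivation is not reproduced in this section):

```python
import numpy as np

# Shrink the empirical reliability toward an initial (e.g. kernel) estimate.
# The weight w stands in for the estimated W-hat; its closed form depends
# on Jani (1991) and is treated as given.
def shrinkage_survival(surv_empirical, surv_initial, w):
    return w * np.asarray(surv_empirical) + (1.0 - w) * np.asarray(surv_initial)
```

With w = 1 the estimator reduces to the empirical reliability; with w = 0 it returns the initial estimate unchanged.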

Bayesian Approaches
The posterior expectations involve integrals which sometimes cannot be obtained in closed form. To treat this problem we employ the MCMC technique to compute the Bayes estimates of the involved parameters.

Gamma Prior
The Bayesian approach requires specifying the prior probability distributions of the unknown parameters. We assume that α, β and λ have independent gamma priors, i.e.

α ∼ Gamma(a_1, b_1), β ∼ Gamma(a_2, b_2), λ ∼ Gamma(a_3, b_3). (13)
The hyperparameters a_1, a_2, a_3, b_1, b_2 and b_3 are assumed to be known and non-negative. The joint prior distribution of α, β and λ can then be written as

π(α, β, λ) ∝ α^{a_1−1} β^{a_2−1} λ^{a_3−1} e^{−(b_1 α + b_2 β + b_3 λ)}. (15)

Based on the prior distributions given by (15), the joint density function of the sample observations and the parameters α, β and λ becomes

f(x, α, β, λ) = L(α, β, λ) π(α, β, λ). (16)

The joint posterior density function of α, β and λ, given the data, can be obtained from

π(α, β, λ | data) = f(x, α, β, λ) / ∫∫∫ f(x, α, β, λ) dα dβ dλ. (17)

The Bayes estimator under the squared error loss function (SELF) is the posterior mean, θ̂ = E_{π(θ|data)}(θ). It is not possible to compute (17) analytically, and therefore the Bayes estimates of the parameters under the SELF are not available in closed form. For this reason, we propose to use the Metropolis-Hastings algorithm, one of the MCMC methods, to obtain samples from the posterior distribution and then to compute the Bayes estimates of α, β and λ.

Jeffreys Prior
A well-known prior for representing a situation in which not much information about the parameters is available was proposed by Jeffreys (1967). This prior, denoted by π_J(α, β, λ), is derived from the Fisher information matrix I(α, β, λ) as

π_J(α, β, λ) ∝ √(det I(α, β, λ)).

The joint posterior density function of α, β and λ, given the data, can then be obtained from

π_J(α, β, λ | data) ∝ L(α, β, λ) π_J(α, β, λ). (18)

Again, it is not possible to compute (18) analytically, and therefore the Bayes estimates of the parameters under the SELF are not available in closed form. Thus, we use the Metropolis-Hastings algorithm to obtain samples from the posterior distribution and then to compute the Bayes estimates of α, β and λ.

Simulation and Data Analysis
Instead of drawing direct samples from the Bayesian posterior distribution, which is often not an easy task, one may use the Metropolis-Hastings algorithm, a general Markov chain simulation method. This method is an extension of the usual acceptance-rejection sampling method. For a comprehensive treatment of MCMC methods and the Metropolis-Hastings algorithm, one may refer to the book by Robert and Casella (2005), Hastings (1970) and Cowles and Carlin (1995). The algorithm proceeds as follows (see Al-Zahrani and Gindwan 2014):
Step 1. Draw a starting point θ^(0) = (α_0, β_0, λ_0) at iteration i = 0, for which f(θ^(0)|data) > 0, from a prior distribution π(θ).
Step 2. At iteration i, draw a proposal θ* from a jumping distribution J_i(θ* | θ^(i−1)), where J_i is symmetric.
Step 3. Generate a sample u from the uniform distribution U(0, 1) and take z = log u. Accept the proposal and set θ^(i) = θ* if z < log f(θ*|data) − log f(θ^(i−1)|data); otherwise set θ^(i) = θ^(i−1).
We produce Markov chains of length 40,000 with different initial points for the parameters involved. Convergence is assessed by drawing trace and ergodic mean plots. It is noted that the Markov chains converge after approximately 2000 observations; therefore, a burn-in of 5000 samples is quite enough to eliminate the effect of the initial values. In order to minimize the autocorrelation among the generated deviates, we take samples of size 3500 from the posterior with thinning equal to 10, starting from observation 3501.
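The steps above can be sketched as a generic random-walk Metropolis-Hastings routine with burn-in and thinning. This Python sketch uses symmetric Gaussian jumps and a user-supplied log-posterior; it is an illustration of the scheme, not the authors' code:

```python
import numpy as np

# Random-walk Metropolis-Hastings with symmetric Gaussian jumps and
# log-uniform acceptance (Steps 1-3 above), with burn-in and thinning.
def metropolis_hastings(log_post, theta0, n_iter, step, burn_in, thin, rng):
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    lp = log_post(theta)
    chain = []
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        # Accept when z = log u falls below the log posterior ratio.
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        # Keep every thin-th draw after the burn-in period.
        if i >= burn_in and (i - burn_in) % thin == 0:
            chain.append(theta.copy())
    return np.array(chain)
```

For the weighted Weibull posterior, log_post would be the log-likelihood plus the log-prior (gamma or Jeffreys), returning −∞ outside the positive orthant; any simple log-density can be used to exercise the routine.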
The estimates of the parameters by maximum likelihood (MLE), Bayes with gamma prior (BGP) and Bayes with Jeffreys prior (BJP) are given in Table 1. The Kolmogorov-Smirnov D_n and Cramér-von Mises W²_n test statistics are also computed and given in Table 1. The results show that the test statistics take their smallest values under the MLEs compared with the Bayesian methods. The plug-in MLE, empirical method (EM), kernel estimator (KE), modified shrinkage estimator (MSE) and the plug-in Bayes estimators are given in Table 2 and shown in Figure 1. The findings show that all the methods of estimation considered in this example estimate the parameters precisely and their performances are quite similar. The Bayes estimator with the gamma prior reaches 0, as the value of x gets larger, faster than those obtained with the Jeffreys prior, the nonparametric estimators and the MLEs.

Concluding Remarks
In this paper we have provided the maximum likelihood plug-in estimator, the empirical estimator, the kernel density estimator and a modified shrinkage estimator for the reliability function of the weighted Weibull distribution. The Markov chain Monte Carlo method is used to compute the Bayes estimators under gamma and Jeffreys priors. To assess the performance of the obtained estimators we analyzed a real data set.

Table 1 .
The estimates of the parameters α, β and λ for Guinea pigs data.