The Poisson-Lomax Distribution

In this paper we propose a new three-parameter lifetime distribution with an upside-down bathtub-shaped failure rate. The distribution is a compound of the zero-truncated Poisson and the Lomax distributions (PLD). The density function, the shape of the hazard rate function, a general expansion for the moments, the density of the rth order statistic, and the mean and median deviations of the PLD are derived and studied in detail. The maximum likelihood estimators of the unknown parameters are obtained, and asymptotic confidence intervals for the parameters are constructed from the asymptotic variance-covariance matrix. Finally, a real data set is analyzed to show the potential of the new distribution.

1. Introduction

Marshall & Olkin (1997) introduced an effective technique for adding a new parameter to a family of distributions, and a great many papers in the literature have used this technique to propose new distributions. In their paper, Marshall & Olkin (1997) generalized the exponential and Weibull distributions. Alice & Jose (2003) followed the same approach and introduced the Marshall-Olkin extended semi-Pareto model and studied its geometric extreme stability. Ghitany, Al-Hussaini & Al-Jarallah (2005) studied the Marshall-Olkin Weibull distribution and established its properties in the presence of censored data. The Marshall-Olkin extended Lomax distribution was introduced by Ghitany, Al-Awadhi & Alkhalfan (2007). Compounding the Poisson and exponential distributions has been considered by many authors; e.g. Kus (2007) proposed the Poisson-exponential lifetime distribution with a decreasing failure rate function. Al-Awadhi & Ghitany (2001) used the Lomax distribution as a mixing distribution for the Poisson parameter and obtained the discrete Poisson-Lomax distribution. Cancho, Louzada-Neto & Barriga (2011) introduced another modification of the Poisson-exponential distribution.
Let Y_1, Y_2, ..., Y_Z be independent and identically distributed random variables, each with density function f, and let Z be a discrete random variable having a zero-truncated Poisson distribution with probability mass function

P(Z = z) = λ^z / (z! (e^λ − 1)), z ∈ {1, 2, ...}, λ > 0. (1)

Suppose that X is a random variable representing the lifetime of a parallel system of Z components, i.e. X = max{Y_1, Y_2, ..., Y_Z}, where the Y's and Z are independent.
The conditional distribution of X given Z = z has the probability density function (pdf)

f_{X|Z}(x | z) = z f(x) [F(x)]^{z−1},

where F(x) is the cumulative distribution function (cdf) corresponding to f(x).
The reliability and the hazard rate functions of X are, respectively, given by

S(x) = (e^λ − e^{λF(x)}) / (e^λ − 1), (4)

h(x) = λ f(x) e^{λF(x)} / (e^λ − e^{λF(x)}). (5)

In this paper we propose a new lifetime distribution by compounding the Poisson and Lomax distributions. As is well known, the Lomax distribution with two parameters is a special case of the generalized Pareto distribution, and it is also known as the Pareto distribution of the second kind. A random variable X is said to have the Lomax distribution, abbreviated as X ∼ LD(α, β), if it has the pdf

f(x) = αβ(1 + βx)^{−(α+1)}, x > 0, α, β > 0. (6)

Here α and β are the shape and scale parameters, respectively. Analogous to the above, the survival and hazard functions associated with (6) are given by

S(x) = (1 + βx)^{−α}, (7)

h(x) = αβ / (1 + βx). (8)

The rest of the paper is organized as follows. In Section 2, we give explicit forms and an interpretation for the distribution function and the probability density function. In Section 3, we discuss the distributional properties of the proposed distribution. Section 4 discusses the estimation problem using the maximum likelihood method. In Section 5, an illustrative example, model selection, and goodness-of-fit tests for the distribution with estimated parameters are presented. Finally, we conclude in Section 6.

2. Model Formulation
Substituting (7) into (4) yields the following reliability function:

S(x) = (1 − e^{−λ(1+βx)^{−α}}) / (1 − e^{−λ}), x > 0. (9)

The pdf associated with (9) is expressed in closed form and is given by

g(x) = αβλ (1 + βx)^{−(α+1)} e^{−λ(1+βx)^{−α}} / (1 − e^{−λ}). (10)

The density function given by (10) can be interpreted as a compound of the zero-truncated Poisson distribution and the Lomax distribution. Suppose that X = max{Y_1, Y_2, ..., Y_Z}, where each Y is distributed according to the Lomax distribution.
Revista Colombiana de Estadística 37 (2014) 225-245

The variable Z has a zero-truncated Poisson distribution, and the variables Y's and Z are independent. Then the conditional distribution of X given Z = z has the pdf

f_{X|Z}(x | z) = z αβ (1 + βx)^{−(α+1)} [1 − (1 + βx)^{−α}]^{z−1}.

The joint distribution of the random variables X and Z, denoted by f_{X,Z}(x, z), is given by f_{X,Z}(x, z) = f_{X|Z}(x | z) P(Z = z), and summing over z gives the marginal pdf of X:

g(x) = Σ_{z=1}^∞ f_{X,Z}(x, z) = λ f(x) e^{λF(x)} / (e^λ − 1),
which is the distribution with the pdf given by (10). The distribution of X may be referred to as the Poisson-Lomax distribution. Symbolically it is abbreviated by X ∼ PLD(α, β, λ) to indicate that the random variable X has the Poisson-Lomax distribution with parameters α, β and λ.
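The compounding construction above can be checked by simulation. The sketch below (all function names, parameter values and the seed are ours, not from the paper) draws X = max{Y_1, ..., Y_Z} directly, with Z zero-truncated Poisson and Y Lomax, and compares the empirical cdf with the closed form G(x) = (e^{λF(x)} − 1)/(e^λ − 1):

```python
import math
import random

def lomax_cdf(x, alpha, beta):
    # Lomax cdf F(x) = 1 - (1 + beta*x)^(-alpha)
    return 1.0 - (1.0 + beta * x) ** (-alpha)

def pld_cdf(x, alpha, beta, lam):
    # cdf of the compound: G(x) = (exp(lam*F(x)) - 1) / (exp(lam) - 1)
    return (math.exp(lam * lomax_cdf(x, alpha, beta)) - 1.0) / (math.exp(lam) - 1.0)

def zero_trunc_poisson(lam, rng):
    # ordinary Poisson by inversion, rejecting Z = 0
    while True:
        u, z = rng.random(), 0
        p = c = math.exp(-lam)
        while u > c:
            z += 1
            p *= lam / z
            c += p
        if z >= 1:
            return z

def sample_parallel_system(alpha, beta, lam, rng):
    # X = max{Y_1, ..., Y_Z}, each Y ~ Lomax drawn by inverse cdf
    z = zero_trunc_poisson(lam, rng)
    return max(((1.0 - rng.random()) ** (-1.0 / alpha) - 1.0) / beta
               for _ in range(z))

rng = random.Random(1)
alpha, beta, lam = 2.0, 1.0, 1.5
draws = [sample_parallel_system(alpha, beta, lam, rng) for _ in range(20000)]
emp = sum(x <= 1.0 for x in draws) / len(draws)
theory = pld_cdf(1.0, alpha, beta, lam)
```

With 20,000 replications the empirical probability P(X ≤ 1) should agree with the closed-form cdf to within Monte Carlo error.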

3. Distributional Properties
In this section, we study the distributional properties of the PLD. In particular, if X ∼ PLD(α, β, λ), then the shapes of the density function, the shapes of the hazard function, the moments, the density of the rth order statistic, and the mean and median deviations of the PLD are derived and studied in detail.

Shapes of pdf
The limit of the Poisson-Lomax density as x → ∞ is 0, and the limit as x → 0 is αβλ/(e^λ − 1). The following theorem gives simple conditions under which the pdf is decreasing or unimodal.

Theorem 1. Let X ∼ PLD(α, β, λ). Then:

(i) If αλ ≤ α + 1, the pdf of X is decreasing.

(ii) If αλ > α + 1, the pdf of X is unimodal.
(iii) When αλ > α + 1, the mode of the Poisson-Lomax distribution is given by

x_mode = (1/β) [(αλ/(α + 1))^{1/α} − 1].

Figure 1 shows the pdf curves of the PLD(α, β, λ) for selected values of the parameters α, β and λ. From the curves, it is evident that the PLD is a positively skewed distribution. It becomes highly positively skewed for large values of the involved parameters.
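As a numerical sanity check of the limit at zero and of the mode (the mode expression used below is our own derivation, obtained by setting the derivative of log g(x) to zero under the pdf form (10); parameter values are ours):

```python
import math

def pld_pdf(x, alpha, beta, lam):
    # compound-form density of the PLD
    y = (1.0 + beta * x) ** (-alpha)
    return (alpha * beta * lam * (1.0 + beta * x) ** (-(alpha + 1.0))
            * math.exp(-lam * y) / (1.0 - math.exp(-lam)))

alpha, beta, lam = 2.0, 1.0, 2.0
limit0 = alpha * beta * lam / (math.exp(lam) - 1.0)   # stated limit as x -> 0
near0 = pld_pdf(1e-9, alpha, beta, lam)

# interior mode from d/dx log g(x) = 0; requires alpha*lam > alpha + 1 (here 4 > 3)
mode = ((alpha * lam / (alpha + 1.0)) ** (1.0 / alpha) - 1.0) / beta
left = pld_pdf(mode - 1e-4, alpha, beta, lam)
at = pld_pdf(mode, alpha, beta, lam)
right = pld_pdf(mode + 1e-4, alpha, beta, lam)
```

The density near zero matches the stated limit, and the density at the candidate mode dominates its neighbors, as expected for a unimodal case.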

Hazard Rate Function
The hazard rate function (hrf) of a random variable X is defined by h(x) = g(x)/S(x), which for the PLD takes the form

h(x) = αβλ (1 + βx)^{−(α+1)} e^{−λ(1+βx)^{−α}} / (1 − e^{−λ(1+βx)^{−α}}). (14)

The following theorem gives simple conditions under which the hrf, given in (14), is decreasing or unimodal.

Theorem 2. The hrf of the PLD is decreasing if (α + 1 − αλ)e^λ ≥ α + 1, and unimodal otherwise.

Proof. The first derivative of h(x) with respect to x can be written so that its sign is governed by η(y) = −(α + 1) + (α + 1 − αλy)e^{λy}, where y = (1 + βx)^{−α} < 1. The remainder of the proof is similar to that of Theorem 1.
Note 2. The following should be noted.
(ii) For λ > 1, h(x) may still be decreasing, depending on the values of α and λ; this happens when (α + 1 − αλ)e^λ ≥ α + 1. A decreasing hrf implies a decreasing pdf. The converse is not necessarily true: e.g. α = 2, λ = 1.2 gives a decreasing pdf but a unimodal hrf.
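A quick numerical illustration of the hazard shape (the hrf form and the unimodality condition αλ/(α + 1) > 1 − e^{−λ} are our reconstruction; parameter values are chosen by us): tabulating h(x) on a grid shows it rising to an interior peak and then falling.

```python
import math

def pld_hrf(x, alpha, beta, lam):
    # h(x) = g(x)/S(x); the normalizing factor 1 - exp(-lam) cancels in the ratio
    y = (1.0 + beta * x) ** (-alpha)
    num = alpha * beta * lam * (1.0 + beta * x) ** (-(alpha + 1.0)) * math.exp(-lam * y)
    den = 1.0 - math.exp(-lam * y)
    return num / den

# a case with alpha*lam/(alpha + 1) > 1 - exp(-lam): the hrf should rise then fall
alpha, beta, lam = 2.0, 1.0, 2.0
hs = [pld_hrf(0.05 * i, alpha, beta, lam) for i in range(1, 200)]
peak = hs.index(max(hs))
```

An interior maximum on the grid (neither at the first nor at the last point) is consistent with a unimodal hrf for this parameter choice.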
Figure 2 shows the hrf curves of the PLD(α, β, λ) for selected values of the parameters α, β and λ.

Moments
We present an infinite-sum representation for the rth moment, µ′_r = E[X^r], and consequently the first four moments and the variance of the PLD.
Theorem 3. The rth moment about the origin of a random variable X ∼ PLD(α, β, λ), with α, β, λ > 0 and α > r, is given by

µ′_r = [αλ / (β^r (1 − e^{−λ}))] Σ_{n=0}^∞ [(−λ)^n / n!] B(r + 1, α(n + 1) − r). (15)

Proof. The rth moment of X can be determined by direct integration using the pdf, i.e. µ′_r = ∫ x^r g(x) dx. We use the Maclaurin expansion e^x = Σ_{n=0}^∞ x^n/n!, valid for all x. We also use the binomial expansion of (y − 1)^k, where k is a positive integer.
Therefore, after some transformations and integrations we obtain the stated expression. This completes the proof of the theorem.
An alternative representation of formula (15) can readily be found by expanding the integrand and substituting the binomial expansion.
One may use this representation to obtain the mean and the variance of X.
Based on the results given in (17), the variance of X is σ² = µ′₂ − µ², where µ = µ′₁. It can be noticed from Table 1 that both the mean and the variance of the PL distribution are decreasing functions of α and β, but they are increasing in λ. Table 2 shows the skewness and kurtosis of the PLD for various selected values of the parameters α, β and λ. The skewness is free of the parameter β. Both the skewness and the kurtosis are decreasing functions of α, and both are increasing in λ.
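The monotonicity of the mean in α and λ can be checked numerically. The sketch below (our own quadrature helper; a finite upper limit is used, so a small truncation error remains) integrates x g(x) for a few parameter choices with α > 1, where the mean exists:

```python
import math

def pld_pdf(x, a, b, lam):
    # density of PLD(a, b, lam) in the compound form (names are ours)
    y = (1.0 + b * x) ** (-a)
    return (a * b * lam * (1.0 + b * x) ** (-(a + 1.0))
            * math.exp(-lam * y) / (1.0 - math.exp(-lam)))

def mean_pld(a, b, lam, upper=1000.0, n=200000):
    # trapezoidal quadrature of x*g(x); adequate when a > 1 so the mean exists
    h = upper / n
    s = 0.5 * upper * pld_pdf(upper, a, b, lam)   # x*g(x) vanishes at x = 0
    for i in range(1, n):
        x = i * h
        s += x * pld_pdf(x, a, b, lam)
    return s * h

m_base = mean_pld(2.0, 1.0, 1.0)
m_big_alpha = mean_pld(3.0, 1.0, 1.0)   # larger alpha
m_big_lam = mean_pld(2.0, 1.0, 2.0)     # larger lambda
```

For these values the mean decreases in α and increases in λ, in line with the behavior reported in Table 1.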

L-moments
Suppose that a random sample X_1, X_2, ..., X_n is collected from X ∼ PLD(θ), where θ = (α, β, λ). In what follows, we derive a general representation for the L-moments of X.
The rth population L-moment is given by the standard expectation of order statistics. Let y = 1 + βx, so that x = (y − 1)/β and dx = (1/β) dy. After some transformations, we arrive at formula (18), with coefficients A_ij as defined there. One can readily use relation (18) to obtain the first L-moments of X_{r:n}. For example, taking r = n = 1 gives λ₁ = E[X_{1:1}], which is the mean of the random variable X.
This result is consistent with that obtained in relation (17). The other two L-moments, λ₂ and λ₃, can be obtained in a similar manner. The method of L-moments consists of equating the first population L-moments, λ₁, λ₂ and λ₃, to the corresponding sample L-moments, l₁, l₂ and l₃, thus obtaining a system of equations to be solved numerically for the unknown parameters θ.
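On the sample side, the first two sample L-moments can be computed with the standard unbiased estimators (this helper is ours; the O(n²) pairwise form is fine for small samples):

```python
from itertools import combinations

def sample_l_moments(data):
    # unbiased sample estimators: l1 (location) and l2 (scale)
    xs = sorted(data)
    n = len(xs)
    l1 = sum(xs) / n
    # l2 = (1/2) * average gap over all ordered pairs
    l2 = sum(xj - xi for xi, xj in combinations(xs, 2)) / (n * (n - 1))
    return l1, l2

l1, l2 = sample_l_moments([0.0, 1.0, 2.0, 3.0])
```

Equating l₁, l₂ (and l₃) to their population counterparts gives the estimating equations of the method of L-moments.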

Order Statistics
Let X_1, X_2, ..., X_n be a random sample of size n from the PL distribution in (10), and let X_{1:n}, ..., X_{n:n} denote the corresponding order statistics. Then the pdf of X_{r:n}, 1 ≤ r ≤ n, is given by (see David & Nagaraja 2003; Arnold, Balakrishnan & Nagaraja 1992)

g_{(r)}(x) = C_{r,n} [G(x)]^{r−1} [1 − G(x)]^{n−r} g(x), (19)

where C_{r,n} = [B(r, n − r + 1)]^{−1}, with B(a, b) being the complete beta function.
Theorem 4. Let G(x) and g(x) be the cdf and pdf of the Poisson-Lomax distribution of a random variable X. The density of the rth order statistic, say g_{(r)}(x), is given by (20).

Proof. First note that (19) can be written as

g_{(r)}(x) = C_{r,n} [1 − Ḡ(x)]^{r−1} [Ḡ(x)]^{n−r} g(x), (21)

so the proof follows by replacing the reliability Ḡ(x) and the pdf g(x) of X ∼ PLD(α, β, λ), obtained from (9) and (10), respectively, substituting them into relation (21), and expanding the term (1 − e^{−λ(1+βx)^{−α}})^{n−r+i} using the binomial expansion.
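For the special case r = n, relation (19) reduces to the cdf of the maximum, G(x)^n, which is easy to verify by simulation (the sampler, seed and parameter values are ours; the inverse-cdf formula follows from inverting the compound cdf):

```python
import math
import random

def pld_cdf(x, alpha, beta, lam):
    # G(x) = (exp(lam*F(x)) - 1)/(exp(lam) - 1), F the Lomax cdf
    F = 1.0 - (1.0 + beta * x) ** (-alpha)
    return (math.exp(lam * F) - 1.0) / (math.exp(lam) - 1.0)

def pld_sample(alpha, beta, lam, rng):
    # inverse-cdf sampling: solve G(x) = u for x
    u = rng.random()
    F = math.log(1.0 + u * (math.exp(lam) - 1.0)) / lam
    return ((1.0 - F) ** (-1.0 / alpha) - 1.0) / beta

rng = random.Random(3)
alpha, beta, lam, n = 2.0, 1.0, 1.5, 3
x0 = 1.0
emp = sum(max(pld_sample(alpha, beta, lam, rng) for _ in range(n)) <= x0
          for _ in range(20000)) / 20000
theory = pld_cdf(x0, alpha, beta, lam) ** n
```

The empirical probability P(X_{n:n} ≤ x0) should match G(x0)^n up to Monte Carlo error.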
Theorem 5. Let X be a random variable distributed according to the PL distribution. Then the mean deviation about the mean, δ₁, and the mean deviation about the median, δ₂, admit closed-form series expressions.

Proof. The proof follows by plugging the density function of the PLD into equation (23) and working out the integral I. Setting y = 1 + βx, so that dy = β dx, and using the expansion e^x = Σ_{n=0}^∞ x^n/n!, yields the series form of I. Substituting I into relation (23) and manipulating the other terms gives the desired result. Similarly, the measure δ₂(M), where M denotes the median, can be obtained.
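Independently of the particular series form, δ₁ satisfies the general identity E|X − µ| = 2∫₀^µ G(x) dx for a nonnegative random variable with finite mean, which gives a numerical cross-check (quadrature settings and parameter values are ours):

```python
import math

def pld_pdf(x, a, b, lam):
    y = (1.0 + b * x) ** (-a)
    return (a * b * lam * (1.0 + b * x) ** (-(a + 1.0))
            * math.exp(-lam * y) / (1.0 - math.exp(-lam)))

def pld_cdf(x, a, b, lam):
    F = 1.0 - (1.0 + b * x) ** (-a)
    return (math.exp(lam * F) - 1.0) / (math.exp(lam) - 1.0)

def trap(f, lo, hi, n):
    # simple trapezoidal quadrature
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

a, b, lam = 3.0, 1.0, 1.5   # a > 1 so the mean exists
mu = trap(lambda x: x * pld_pdf(x, a, b, lam), 0.0, 300.0, 100000)
delta1_direct = trap(lambda x: abs(x - mu) * pld_pdf(x, a, b, lam), 0.0, 300.0, 100000)
delta1_formula = 2.0 * trap(lambda x: pld_cdf(x, a, b, lam), 0.0, mu, 20000)
```

The direct quadrature of E|X − µ| and the cdf-based identity agree to within the quadrature/truncation error.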

4. Estimation
In this section we consider maximum likelihood estimation (MLE) of the involved parameters. The asymptotic distribution of the estimator of θ = (α, β, λ) is obtained using the elements of the inverse Fisher information matrix.

Maximum Likelihood Estimation
The idea behind maximum likelihood estimation is to determine the parameter values that maximize the probability (likelihood) of the observed sample. For this purpose, let X_1, X_2, ..., X_n be a random sample from X ∼ PLD(θ), where θ = (α, β, λ). Defining, for simplicity, A_i = 1 + βx_i, the log-likelihood function of the observed sample is

ℓ(θ) = n log(αβλ) − n log(1 − e^{−λ}) − (α + 1) Σ_{i=1}^n log A_i − λ Σ_{i=1}^n A_i^{−α}.

The MLEs of α, β and λ, say α̂, β̂ and λ̂, are obtained by setting the first partial derivatives of the log-likelihood with respect to α, β and λ equal to zero. The resulting nonlinear equations (29), (30) and (31) are complicated to solve analytically, so an iterative procedure is applied to solve them numerically.
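A minimal numerical check of the log-likelihood (function names are ours; the inverse-cdf sampler follows from inverting the compound cdf) confirms that the true parameters attain a higher likelihood than clearly misspecified ones on simulated data:

```python
import math
import random

def pld_loglik(a, b, lam, xs):
    # closed-form log-likelihood with A_i = 1 + b*x_i
    n = len(xs)
    s_log = sum(math.log(1.0 + b * x) for x in xs)
    s_pow = sum((1.0 + b * x) ** (-a) for x in xs)
    return (n * math.log(a * b * lam) - n * math.log(1.0 - math.exp(-lam))
            - (a + 1.0) * s_log - lam * s_pow)

def sample_pld(a, b, lam, rng):
    # inverse-cdf sampling through the compound cdf
    u = rng.random()
    F = math.log(1.0 + u * (math.exp(lam) - 1.0)) / lam
    return ((1.0 - F) ** (-1.0 / a) - 1.0) / b

rng = random.Random(7)
true = (2.0, 1.0, 1.5)
data = [sample_pld(*true, rng) for _ in range(3000)]
ll_true = pld_loglik(*true, data)
ll_bad_alpha = pld_loglik(4.0, 1.0, 1.5, data)
ll_bad_beta = pld_loglik(2.0, 3.0, 1.5, data)
```

With a sample of 3,000 observations, the log-likelihood at the generating parameters clearly dominates strongly misspecified alternatives; in practice the score equations are solved by an iterative optimizer rather than by comparison.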

Asymptotic Distribution
We obtain the asymptotic distribution of the MLE of θ = (α, β, λ). The asymptotic variances of the MLEs are given by the elements of the inverse of the Fisher information matrix J(θ) = E[I(θ)], where I(θ), with entries I_ij, i, j = 1, 2, 3, is the observed information matrix formed from the second partial derivatives of the log-likelihood. The exact mathematical expressions for J(θ) are complicated to obtain, so the observed information matrix can be used instead. The variance-covariance matrix may then be approximated by V = I^{−1}, and the asymptotic distribution of the maximum likelihood estimator can be written as

(α̂, β̂, λ̂) ≈ N₃(θ, V) (32)

(see Miller 1981).
Since V involves the parameters α, β and λ, we replace them by the corresponding MLEs to obtain an estimate of V, denoted by V̂. Using (32), approximate 100(1 − ϑ)% confidence intervals for α, β and λ are given, respectively, by

α̂ ± Z_{ϑ/2} √V̂₁₁, β̂ ± Z_{ϑ/2} √V̂₂₂, λ̂ ± Z_{ϑ/2} √V̂₃₃,

where Z_ϑ is the upper 100ϑ-th percentile of the standard normal distribution.
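To sketch the construction of such an interval in the simplest case, the code below (ours throughout) treats α and β as known, maximizes the log-likelihood over λ on a grid, approximates the observed information by central differences, and forms a 95% Wald interval:

```python
import math
import random

def sample_pld(a, b, lam, rng):
    # inverse-cdf sampling through the compound cdf
    u = rng.random()
    F = math.log(1.0 + u * (math.exp(lam) - 1.0)) / lam
    return ((1.0 - F) ** (-1.0 / a) - 1.0) / b

rng = random.Random(11)
a, b, lam0, n = 2.0, 1.0, 1.5, 2000
xs = [sample_pld(a, b, lam0, rng) for _ in range(n)]

# with alpha, beta held fixed, the log-likelihood in lam depends on the data
# only through these two sufficient statistics
t_log = sum(math.log(1.0 + b * x) for x in xs)
s_pow = sum((1.0 + b * x) ** (-a) for x in xs)

def loglik(lam):
    return (n * math.log(a * b * lam) - n * math.log(1.0 - math.exp(-lam))
            - (a + 1.0) * t_log - lam * s_pow)

# crude grid maximization, then observed information (negative second
# derivative of the log-likelihood) by central differences
lam_hat = max((0.2 + 0.005 * i for i in range(1200)), key=loglik)
h = 1e-3
obs_info = -(loglik(lam_hat + h) - 2.0 * loglik(lam_hat) + loglik(lam_hat - h)) / h**2
se = math.sqrt(1.0 / obs_info)
ci = (lam_hat - 1.96 * se, lam_hat + 1.96 * se)
```

In the three-parameter case the same idea applies with the full 3 × 3 observed information matrix and its inverse V̂, as described above.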
In order to illustrate the estimation of the involved parameters numerically, we simulated the ML estimators for different sample sizes. The calculations are based on 10,000 simulated samples from the standard PLD. Table 4 shows the MLEs, mean squared errors (MSE) and 95% confidence limits (LCL & UCL) for the parameters α, β and λ. The true parameter values used for the simulation were α = 1, β = 1 and λ = 2. It is observed that as the sample size n increases, the MLEs of α and λ decrease toward the true values, while the MLE of β increases.

Figure 1: Plot of the probability density function for different values of the parameters α, β and λ.

Figure 2: Plot of the hazard function for different values of the parameters α, β and λ.

Table 1: Mean and variance of the PLD for various values of α, β and λ.

Table 2: Skewness and kurtosis of the PLD for various values of α, β and λ.

Table 5: MLEs (standard errors in parentheses) and the measures AIC, BIC, HQIC and CAIC.