Minimax Estimation of the Parameter of the Generalized Exponential Distribution under Square Log Error Loss and MLINEX Loss Functions

The aim of this paper is to study minimax estimation of the parameter of the generalized exponential distribution. Bayes estimators and minimax estimators of the parameter of the generalized exponential distribution are obtained under the well-known weighted square error loss, squared log error loss and Modified Linear Exponential (MLINEX) loss functions. Finally, Monte Carlo simulation is used to compare the efficiency of these minimax estimators in terms of mean square error.


INTRODUCTION
The generalized exponential distribution has been used as an alternative to the gamma and Weibull distributions in many situations by Gupta and Kundu (1999). In recent years, an impressive array of papers has been devoted to studying the behaviour of the parameters of the generalized exponential distribution using both classical and Bayesian approaches; a very good summary of this work can be found in Gupta and Kundu (1999, 2008), Raqab and Madi (2005) and Singh et al. (2008), and in the references cited therein for some recent developments on the GE distribution.
A random variable X is said to follow the generalized exponential distribution (GE(θ)) if its Probability Density Function (PDF) is given by:

f(x; θ) = θ (1 - e^{-x})^{θ-1} e^{-x},  x > 0, θ > 0,   (1)

where θ is the shape parameter.
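For illustration only (not part of the original derivation), the GE(θ) law can be sampled by inverse transform, since its CDF is F(x; θ) = (1 - e^{-x})^θ. A minimal Python sketch, assuming NumPy; the helper name rgen_exp is illustrative:

import numpy as np

def rgen_exp(n, theta, rng=None):
    """Draw n samples from GE(theta), i.e. F(x) = (1 - exp(-x))**theta, via inverse transform."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    # Invert F: u = (1 - e^{-x})^theta  =>  x = -log(1 - u**(1/theta))
    return -np.log(1.0 - u ** (1.0 / theta))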
Minimax estimation is a non-classical approach to estimation in statistical inference which has drawn great attention from many researchers. Podder et al. (2004) studied the minimax estimator of the parameter of the Pareto distribution under quadratic and MLINEX loss functions. Dey (2008) studied the minimax estimator of the parameter of the Rayleigh distribution. Shadrokh and Pazira (2010) studied the minimax estimator of the parameter of the minimax distribution under several loss functions.
In this study, the weighted square error loss, squared log error loss and MLINEX loss functions are used to obtain minimax estimators of the parameter of the GE(θ) distribution.

PRELIMINARIES
Let X_1, X_2, …, X_n be a random sample from the generalized exponential distribution (1) and let x = (x_1, x_2, …, x_n) denote the observed values. The Likelihood Function (LF) of θ for the given sample observations is:

L(θ | x) = ∏_{i=1}^{n} θ (1 - e^{-x_i})^{θ-1} e^{-x_i} ∝ θ^n e^{-θ t},   (3)

where t = -∑_{i=1}^{n} ln(1 - e^{-x_i}) is the observation of T = -∑_{i=1}^{n} ln(1 - e^{-X_i}). The maximum likelihood estimator of θ is easily derived as:

θ̂_ML = n / T.   (4)

We can also show that T = -∑_{i=1}^{n} ln(1 - e^{-X_i}) follows the gamma distribution Γ(n, θ).

In the Bayes approach, we further assume that some prior knowledge about the parameter θ is available to the investigator from past experience with the model. The prior knowledge can often be summarized in terms of a so-called prior density on the parameter space of θ. In the following discussion we assume Jeffreys' non-informative prior density, defined as:

π(θ) ∝ 1/θ,  θ > 0.   (5)

Three loss functions are considered. The weighted square error loss function:

L(θ̂, θ) = (θ̂ - θ)² / θ².   (6)

The squared log error loss function:

L(θ̂, θ) = (ln θ̂ - ln θ)² = (ln(θ̂/θ))²,   (7)

which tends to infinity as θ̂ → 0 or ∞. This loss is not always convex; it is convex for θ̂/θ ≤ e and concave otherwise, but its posterior risk has a minimum with respect to θ̂ at θ̂ = exp{E(ln θ | x)}.

The MLINEX loss function: Varian (1975) and Zellner (1986) proposed an asymmetric loss function known as the LINEX loss function, which is suitable for situations where overestimation is more costly than underestimation.
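Under the assumptions above, the statistic T and the maximum likelihood estimator (4) can be computed directly from a sample. A minimal sketch, assuming NumPy; the function names are illustrative, not from the paper:

import numpy as np

def t_statistic(x):
    """T = -sum(log(1 - exp(-x_i))); under GE(theta), T follows Gamma(n, theta)."""
    x = np.asarray(x, dtype=float)
    return -np.sum(np.log(1.0 - np.exp(-x)))

def mle_theta(x):
    """Maximum likelihood estimator of the shape parameter: theta_hat = n / T."""
    return len(x) / t_statistic(x)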
When estimating a parameter θ by θ̂, the MLINEX loss is given by (Podder et al., 2004):

L(θ̂, θ) = w [ (θ̂/θ)^c - c ln(θ̂/θ) - 1 ],  c ≠ 0, w > 0.   (8)

Remark 1: For any prior distribution of θ, under the MLINEX loss function (8) the Bayes estimator of θ, denoted by θ̂_M, is given by:

θ̂_M = [ E(θ^{-c} | x) ]^{-1/c},   (9)

provided that the posterior expectation E(θ^{-c} | x) exists and is finite.

Bayes estimation: Combining (3) with the non-informative prior (5) using Bayes' theorem, the posterior pdf of θ is given by:

π(θ | x) = (t^n / Γ(n)) θ^{n-1} e^{-θ t},  θ > 0,   (10)

that is, the posterior distribution of θ is Γ(n, t). Then:

• The Bayes estimator under the weighted square error loss function (6) is given by:

θ̂_BW = E(θ^{-1} | x) / E(θ^{-2} | x) = (n - 2) / t.

• Using (10), E(ln θ | x) = ψ(n) - ln t, where ψ(·) denotes the digamma function. The Bayes estimator under the squared log error loss function (7) then comes out to be:

θ̂_BL = exp{E(ln θ | x)} = e^{ψ(n)} / t.

• Using (10), the Bayes estimator under the MLINEX loss function (8) is obtained as:

θ̂_BM = [E(θ^{-c} | x)]^{-1/c} = (1/t) [Γ(n) / Γ(n - c)]^{1/c},  c < n.
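Since the posterior is Γ(n, t), the three Bayes estimators have closed forms and can be evaluated from a sample in a few lines. A sketch assuming NumPy and SciPy (digamma and gammaln); the function name bayes_estimators is illustrative:

import numpy as np
from scipy.special import digamma, gammaln

def bayes_estimators(x, c=1.0):
    """Closed-form Bayes estimators under weighted square error, squared log error and MLINEX losses."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = -np.sum(np.log(1.0 - np.exp(-x)))
    theta_w = (n - 2) / t                                     # weighted square error loss (6)
    theta_l = np.exp(digamma(n)) / t                          # squared log error loss (7)
    theta_m = np.exp((gammaln(n) - gammaln(n - c)) / c) / t   # MLINEX loss (8), requires c < n
    return theta_w, theta_l, theta_m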

MINIMAX ESTIMATION
The derivation of the minimax estimators depends primarily on a theorem due to Lehmann, which can be stated as follows.

Lemma 1 (Lehmann's theorem) (Brown, 1968): If τ = {F_θ : θ ∈ Θ} is a family of distribution functions and D a class of estimators of θ, and if δ* ∈ D is a Bayes estimator with respect to some prior distribution on Θ whose risk function R(δ*, θ) equals a constant on Θ, then δ* is a minimax estimator of θ.

Theorem 1: Let X_1, X_2, …, X_n be a random sample drawn from the density (1). Then

θ̂_W = (n - 2) / T,  with T = -∑_{i=1}^{n} ln(1 - e^{-X_i}),

is the minimax estimator of the parameter θ for the weighted square error loss (6).
Theorem 2: Let X_1, X_2, …, X_n be a random sample drawn from the density (1). Then

θ̂_L = e^{ψ(n)} / T

is the minimax estimator of the parameter θ for the squared log error loss (7).
Theorem 3: Let X_1, X_2, …, X_n be a random sample drawn from the density (1). Then

θ̂_M = (1/T) [Γ(n) / Γ(n - c)]^{1/c},  c ≠ 0, c < n,

is the minimax estimator of the parameter θ for the MLINEX loss (8).
Proof: We first prove Theorem 1, using Lehmann's theorem as stated above. The Bayes estimator of θ under the loss (6) has already been found to be θ̂_W = (n - 2)/T; by Lehmann's theorem it remains to show that its risk is constant. Since T ~ Γ(n, θ), E(1/T) = θ/(n - 1) and E(1/T²) = θ²/((n - 1)(n - 2)). The risk function of the estimator is therefore

R(θ̂_W, θ) = E[(θ̂_W - θ)²/θ²] = (n - 2)² E(1/T²)/θ² - 2(n - 2) E(1/T)/θ + 1 = (n - 2)/(n - 1) - 2(n - 2)/(n - 1) + 1 = 1/(n - 1),

which is a constant. So, according to Lehmann's theorem, θ̂_W is the minimax estimator of the parameter θ of the generalized exponential distribution under the weighted square error loss function (6).

We now prove Theorem 2. Write ln θ̂_L - ln θ = ψ(n) - ln(θT). Since θT ~ Γ(n, 1), we use the facts E[ln(θT)] = ψ(n) and E[(ln(θT))²] = ψ′(n) + ψ(n)². Then the risk function of the estimator θ̂_L = e^{ψ(n)}/T is

R(θ̂_L, θ) = E[(ln θ̂_L - ln θ)²] = ψ(n)² - 2ψ(n) E[ln(θT)] + E[(ln(θT))²] = ψ′(n),

which is a constant. So, according to Lehmann's theorem, θ̂_L is the minimax estimator of the parameter θ of the generalized exponential distribution under the squared log error loss function (7).

Finally we prove Theorem 3. Put k = [Γ(n)/Γ(n - c)]^{1/c}, so that θ̂_M/θ = k/(θT). Since θT ~ Γ(n, 1), E[(θT)^{-c}] = Γ(n - c)/Γ(n) and E[ln(θT)] = ψ(n). The risk function under the MLINEX loss function (8) is

R(θ̂_M, θ) = w E[(θ̂_M/θ)^c - c ln(θ̂_M/θ) - 1] = w[k^c Γ(n - c)/Γ(n) - c ln k + c ψ(n) - 1] = w c [ψ(n) - ln k],

since k^c Γ(n - c)/Γ(n) = 1. This does not depend on θ, so the risk is constant and, by Lehmann's theorem, θ̂_M is the minimax estimator of θ under the MLINEX loss (8).

Simulation study: The Estimated Risk (ER) of an estimator θ̂ is computed as ER = (1/N) ∑_{i=1}^{N} (θ̂_i - θ)², where θ̂_i is the estimate at the i-th run and N is the number of simulation runs.
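The constant-risk property used in the proofs can also be checked numerically: under any true θ, the simulated risk of each estimator should stay approximately the same. A minimal sketch for the weighted square error case, assuming NumPy; the function name is illustrative:

import numpy as np

def simulated_risk_weighted_se(theta, n, reps=20000, rng=None):
    """Monte Carlo estimate of E[((n-2)/T - theta)^2 / theta^2]; should be about 1/(n-1) for every theta."""
    rng = np.random.default_rng(0) if rng is None else rng
    T = rng.gamma(shape=n, scale=1.0 / theta, size=reps)  # T ~ Gamma(n, theta)
    est = (n - 2) / T
    return np.mean(((est - theta) / theta) ** 2)

# For example, simulated_risk_weighted_se(0.5, 20) and simulated_risk_weighted_se(3.0, 20)
# should both be close to 1/(20 - 1) ≈ 0.0526.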

CONCLUSION
The estimated values of the parameter and the ER of the estimators are computed by the Monte Carlo simulation method from the generalized exponential distribution (1) with θ = 1.0. It is seen that for small sample sizes (n < 50), the minimax estimator under the MLINEX loss function with c = 1.0 appears to be better than the other minimax estimators, but for large sample sizes (n > 50) all three estimators have approximately the same ER. The obtained results are presented in Table 1.
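A sketch of the kind of Monte Carlo comparison described here (θ = 1.0, one sample size at a time); the replication counts and the figures in Table 1 are the paper's, and the code below, which assumes NumPy and SciPy, is only an illustrative reconstruction:

import numpy as np
from scipy.special import digamma, gammaln

def er_comparison(theta=1.0, n=30, reps=10000, c=1.0, seed=0):
    """Estimated risk (mean square error) of the three minimax estimators at a fixed true theta."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(reps, n))
    x = -np.log(1.0 - u ** (1.0 / theta))               # GE(theta) samples, one row per replication
    T = -np.sum(np.log(1.0 - np.exp(-x)), axis=1)
    est_w = (n - 2) / T                                  # weighted square error loss
    est_l = np.exp(digamma(n)) / T                       # squared log error loss
    est_m = np.exp((gammaln(n) - gammaln(n - c)) / c) / T  # MLINEX loss
    return {name: np.mean((e - theta) ** 2) for name, e in
            [("weighted SE", est_w), ("squared log", est_l), ("MLINEX", est_m)]}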
