Statistical Inference of Truncated Normal Distribution Based on the Generalized Progressive Hybrid Censoring

In this paper, the parameter estimation problem of a truncated normal distribution is discussed based on generalized progressive hybrid censored data. The maximum likelihood estimates of the unknown quantities are first derived through the Newton–Raphson algorithm and the expectation maximization algorithm. Based on the asymptotic normality of the maximum likelihood estimators, we develop asymptotic confidence intervals. The percentile bootstrap method is also employed for small sample sizes. Further, Bayes estimates are evaluated under various loss functions, namely the squared error, general entropy, and linex loss functions. The Tierney and Kadane approximation, as well as the importance sampling approach, is applied to obtain the Bayesian estimates under proper prior distributions. The associated Bayesian credible intervals are constructed as well. Extensive numerical simulations are implemented to compare the performance of the different estimation methods. Finally, a real data example is analyzed to illustrate the inference approaches.


Truncated Normal Distribution
The normal distribution has played a crucial role in a diversity of research fields such as reliability analysis and economics, as well as many other scientific areas. However, in many practical situations, experimental data are only available within a certain range, so the truncated form of the normal distribution is more applicable in practice.
The truncated normal distribution, owing to its practical relevance, has gained considerable attention among researchers, and interesting results have been obtained. Ref. [1] studied the maximum likelihood estimates for singly and doubly truncated normal distributions. Ref. [2] applied the method of moments to estimate the unknown parameters of singly truncated normal distributions from the first three sample moments. Ref. [3] investigated estimators of the unknown parameters of the normal distribution under singly censored data. Ref. [4] utilized an iterative procedure to transform singly right censored samples into pseudo-complete samples and then estimated the parameters of interest through the transformed data. One can refer to [5] for more details about the truncated normal distribution. Ref. [6] adopted maximum likelihood and Bayesian methods to estimate the unknown parameters of the truncated normal distribution under the progressive type-II censoring scheme; optimal censoring plans under different optimality criteria were discussed as well. The above-mentioned works assume a known truncation point. Ref. [7] developed a standard truncated normal distribution whose truncated mean and variance are zero and one, respectively, regardless of the location of the truncation points. Ref. [8] considered the maximum likelihood estimators of the unknown parameters with a known and an unknown truncation point, respectively.
Generally, lifetime data are non-negative; under this circumstance, the left-truncated normal distribution with truncation point zero can be applied to investigate statistical inference of the unknown parameters.
Suppose that a variable X follows the left-truncated normal distribution TN(µ, τ) with truncation point zero. Its probability density function (pdf) on (0, ∞) is
\[
f(x; \mu, \tau) = \frac{1}{\sqrt{2\pi\tau}\,\Phi\!\left(\mu/\sqrt{\tau}\right)} \exp\left\{-\frac{(x-\mu)^{2}}{2\tau}\right\}, \quad x > 0,
\]
where µ > 0 and τ > 0 are, respectively, the mean and the variance of the corresponding untruncated normal distribution, and Φ(·) is the cumulative distribution function of the standard normal distribution. The corresponding cumulative distribution function (cdf) takes the form
\[
F(x; \mu, \tau) = \frac{\Phi\!\left(\frac{x-\mu}{\sqrt{\tau}}\right) - \Phi\!\left(-\frac{\mu}{\sqrt{\tau}}\right)}{\Phi\!\left(\frac{\mu}{\sqrt{\tau}}\right)}, \quad x > 0.
\]
The hazard rate function (hrf) of the left-truncated normal distribution at zero is
\[
h(x; \mu, \tau) = \frac{f(x; \mu, \tau)}{1 - F(x; \mu, \tau)} = \frac{\phi\!\left(\frac{x-\mu}{\sqrt{\tau}}\right)}{\sqrt{\tau}\left[1 - \Phi\!\left(\frac{x-\mu}{\sqrt{\tau}}\right)\right]},
\]
where φ(·) denotes the standard normal pdf. In addition, writing λ = φ(µ/√τ)/Φ(µ/√τ), the expectation and variance are, respectively,
\[
E(X) = \mu + \sqrt{\tau}\,\lambda, \qquad \operatorname{Var}(X) = \tau\left(1 - \frac{\mu}{\sqrt{\tau}}\,\lambda - \lambda^{2}\right).
\]
Figure 1 presents the pdfs and hrfs of the left-truncated normal distribution at zero for different µ and τ. It is observed that the pdf is unimodal and the hrf is monotonically increasing.
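As a quick numerical check of these expressions, the pdf and hrf can be evaluated with SciPy; `truncnorm` parameterizes the truncation bounds on the standard-normal scale, so left truncation at zero corresponds to a = −µ/√τ. This is an illustrative sketch, not part of the paper's R implementation.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def tn_pdf(x, mu, tau):
    """pdf of N(mu, tau) left-truncated at zero (tau is the variance)."""
    s = np.sqrt(tau)
    return norm.pdf((x - mu) / s) / (s * norm.cdf(mu / s))

def tn_hazard(x, mu, tau):
    """hrf f/(1 - F); for x > 0 it coincides with the untruncated normal hazard."""
    s = np.sqrt(tau)
    return norm.pdf((x - mu) / s) / (s * norm.sf((x - mu) / s))

mu, tau = 0.5, 1.0
s = np.sqrt(tau)
rv = truncnorm(a=-mu / s, b=np.inf, loc=mu, scale=s)  # the same TN(mu, tau)
x = np.linspace(0.1, 3.0, 50)
assert np.allclose(tn_pdf(x, mu, tau), rv.pdf(x))     # matches SciPy's pdf
assert np.all(np.diff(tn_hazard(x, mu, tau)) > 0)     # hrf is increasing
```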

Generalized Progressive Hybrid Censoring Scheme
As technology has developed by leaps and bounds in the past few years, products have become more reliable, so we cannot obtain enough lifetime data to estimate the unknown parameters under the constraints of time and cost. Since it is unavoidable to lose some experimental units during an experiment, an increasing number of researchers have turned their attention to censored data. The earliest censoring schemes proposed were type-I and type-II censoring. Combining type-I with type-II censoring, researchers later developed the hybrid censoring scheme. None of these schemes enables experimenters to withdraw experimental units at any stage before the experiment is over. Ref. [9] therefore introduced the progressive censoring scheme. However, this scheme may require a large amount of time to complete the experiment. To address this drawback, Ref. [10] developed an effective method: the progressive hybrid censoring scheme.
A progressive hybrid censored sample can be generated as follows. First, n identical units are put on test. We denote X_1, X_2, ..., X_m as the ordered failure times and R = (R_1, R_2, ..., R_m) as the censoring scheme, which satisfies ∑_{i=1}^{m} R_i + m = n. Once the m-th failure occurs or the threshold time T is reached, the test is stopped; that is, the stopping time is T* = min{X_m, T}. When the first failure happens, we randomly remove R_1 units from the test. On the occasion of the second failure, we randomly withdraw R_2 surviving units from the test. Analogously, R_i units are removed at random when the i-th failure occurs. Finally, either when the m-th failure has happened or when the threshold time T has been reached, the remaining surviving units are all removed from the test.
However, in the progressive hybrid censoring scheme we may not obtain accurate estimates of the unknown parameters, since there might be only a few failures before the pre-fixed threshold time T. For this reason, Ref. [11] introduced the generalized progressive hybrid censoring scheme (GPHCS). This scheme realizes a compromise between the time restriction and the number of failed observations by terminating the test at the random time T_end = max{X_k, T*}, which guarantees that at least k failures are observed before the terminal time T_end. Assume that a test starts with n identical units, and that the acceptable minimum number k of failures and the expected number m of failures (0 < k ≤ m ≤ n) are pre-fixed; the threshold time T and a censoring scheme satisfying ∑_{i=1}^{m} R_i + m = n are chosen ahead of the test. As before, we remove R_1 units randomly on the occasion of the first failure. On the arrival of the second failure, we randomly withdraw R_2 units from the remaining experimental units. The procedure repeats until the end time T_end = max{X_k, min{X_m, T}} is reached.
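The stopping rule above can be sketched in a few lines; the function below (an illustrative helper, not from the paper) takes the ordered failure times under the progressive scheme and reports which case applies, the termination time, and the observed failures.

```python
def gphcs_stop(x, k, m, T):
    """Classify a GPHCS test run.

    x: the first m ordered failure times under the progressive scheme (k <= m).
    Returns (case, T_end, observed failure times)."""
    xk, xm = x[k - 1], x[m - 1]
    t_end = max(xk, min(xm, T))          # T_end = max{X_k, min{X_m, T}}
    if T < xk:                           # Case I: fewer than k failures by T
        return "I", t_end, x[:k]
    if xm <= T:                          # Case III: all m failures before T
        return "III", t_end, x[:m]
    return "II", t_end, [t for t in x if t <= T]  # Case II: stop at T
```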
Schematically, Figure 2 illustrates how generalized progressive hybrid censored data are generated under different conditions on the pre-fixed time.
The generalized progressive hybrid censoring scheme modifies the terminal time T_end to attain sufficient failed observations within a reasonable experimental period and brings about more accurate estimates of the unknown parameters. Owing to its higher efficiency for statistical inference, attention to the generalized progressive hybrid censoring scheme has mounted. Ref. [11] employed classical and Bayesian estimation techniques to estimate the entropy of the Weibull distribution. Based on the method introduced previously, Ref. [12] further derived exact confidence intervals. Ref. [13] considered a competing risks model when data were sampled under the generalized progressive hybrid censoring scheme from an exponential distribution, and derived the estimates through the maximum likelihood approach and the importance sampling method. On the basis of the GPHCS, Ref. [14] investigated two-parameter Rayleigh competing risks data via maximum likelihood estimation, and the Gibbs sampling technique was employed to approximate the associated Bayes estimates.
The aim of our work is to obtain classical and Bayesian estimates for the left-truncated normal distribution at zero when data are observed under generalized progressive hybrid censoring. To begin with, the Newton-Raphson (N-R) algorithm is proposed to compute the maximum likelihood estimates (MLEs) of the unknown parameters of TN(µ, τ). Another iterative approach, the expectation maximization (EM) algorithm, is also introduced to calculate the estimates. Subsequently, the observed Fisher information matrix and the percentile bootstrap (Boot-p) method are considered to obtain confidence interval estimates. In the Bayesian framework, we employ the Tierney and Kadane (T-K) approximation and the importance sampling (IS) technique to evaluate the Bayes estimators. The associated highest posterior density (HPD) intervals are developed as well. To the best of our knowledge, statistical inference for the left-truncated normal distribution at zero with generalized progressive hybrid censored samples has not been carried out previously.
The rest of this paper is organized as follows. The maximum likelihood estimators of µ and τ are theoretically derived via the N-R approach and the EM method in Section 2. In Section 3, we obtain the asymptotic confidence intervals using the asymptotic distributions of the MLEs, the asymptotic distributions of the log-transformed MLEs, and the Boot-p method. In Section 4, Bayes estimates of all unknown quantities are obtained by applying the T-K approximation under different loss functions. Besides, we compute the Bayes estimates of the parameters using the importance sampling procedure, based on which the corresponding HPD intervals are developed. Numerical simulations and the analysis of a real data example are carried out in Section 5. Finally, we provide some concluding remarks in Section 6.

Maximum Likelihood Estimation
Our interest in this section is to obtain the maximum likelihood estimates of µ and τ with generalized progressive hybrid censored data. Based on the pdf and cdf of the truncated normal distribution when the left truncation point is zero, the likelihood and log-likelihood functions of the three cases are expressed as • Case I: • Case II: • Case III: The likelihood and log-likelihood functions for the three cases can then be combined into the general expression (1). Next, take the first derivatives of (1) with respect to µ and τ, respectively, and set them equal to zero. A set of score equations is obtained as follows.
where η_T = (T − µ)/√τ, and the score equations are H_1(µ, τ) = 0 and H_2(µ, τ) = 0 for Case I and Case II, with analogous equations for Case III. The maximum likelihood estimates of the unknown parameters are the solutions to these equations. Apparently, the expressions for µ and τ involve a nonlinear problem, and analytic solutions are not available. Therefore, we have to rely on numerical methods such as the N-R method and the EM algorithm to approximate the values of the unknown parameters.

Newton-Raphson Algorithm
Since the first- and second-order derivatives of the log-likelihood function are available, the Newton-Raphson algorithm is appropriate for maximizing the log-likelihood function.
The second-order derivatives of (1) with respect to the parameters are given below. The estimates can be updated accordingly, and the process repeats until |µ_{l+1} − µ_l| < ε and |τ_{l+1} − τ_l| < ε, where ε is a pre-fixed tolerance limit.
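To make the update concrete, here is a minimal Newton-Raphson sketch for the complete-sample case (so the likelihood is just the product of TN(µ, τ) pdfs), with the derivatives replaced by finite differences; the paper's version instead uses the analytic derivatives of the censored-data log-likelihood (1).

```python
import numpy as np
from scipy.stats import norm, truncnorm

def loglik(theta, x):
    """Complete-sample log-likelihood of N(mu, tau) left-truncated at zero."""
    mu, tau = theta
    if tau <= 0:
        return -np.inf
    s = np.sqrt(tau)
    return np.sum(norm.logpdf((x - mu) / s) - np.log(s) - norm.logcdf(mu / s))

def newton_raphson(x, theta0, eps=1e-4, max_iter=50, h=1e-4):
    theta = np.asarray(theta0, dtype=float)
    I = np.eye(2)
    for _ in range(max_iter):
        # finite-difference gradient and Hessian of the log-likelihood
        grad = np.array([(loglik(theta + h * I[i], x) - loglik(theta - h * I[i], x))
                         / (2 * h) for i in range(2)])
        hess = np.array([[(loglik(theta + h * I[i] + h * I[j], x)
                           - loglik(theta + h * I[i] - h * I[j], x)
                           - loglik(theta - h * I[i] + h * I[j], x)
                           + loglik(theta - h * I[i] - h * I[j], x)) / (4 * h * h)
                          for j in range(2)] for i in range(2)])
        theta_new = theta - np.linalg.solve(hess, grad)   # N-R update
        if np.max(np.abs(theta_new - theta)) < eps:
            return theta_new
        theta = theta_new
    return theta

mu0, tau0 = 0.5, 1.0
x = truncnorm.rvs(a=-mu0, b=np.inf, loc=mu0, scale=1.0, size=2000, random_state=1)
mu_hat, tau_hat = newton_raphson(x, (mu0, tau0))
```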

Expectation Maximization Algorithm
Here the EM algorithm discussed in [15] is employed to obtain the maximum likelihood estimates of µ and τ for the left-truncated normal distribution. It is an effective procedure for calculating MLEs in the presence of censored data. Each iteration of the EM algorithm consists of two steps: the expectation step (E-step) and the maximization step (M-step). The former computes the information carried by the censored data given the observed data, whereas the latter re-estimates the current parameters.
As mentioned earlier, we can only observe J failed units in the three cases under consideration. Assume that X = (X_1, X_2, ..., X_J) stands for the observed data, which follow the left-truncated normal distribution at zero, and that Z denotes the censored data, where Z_{ij}, i = 1, 2, ..., J, j = 1, 2, ..., R_i, is the j-th unit withdrawn at the failure time X_i, and Z_{Tj}, j = 1, 2, ..., R_T, is the j-th unit withdrawn at the end time T_end in Case II. Thus, denoting the complete data by C = (X, Z), the likelihood function of the complete data under the GPHCS takes the form given below. Leaving out the constant term, the corresponding log-likelihood function is transformed accordingly. Following the two steps of the EM algorithm implemented in [16], we obtain the 'pseudo-log-likelihood' function by replacing the censored data with the associated conditional expected values. E-step: Under the complete data, the 'pseudo-log-likelihood' function is given below. For i = 1, 2, ..., J; j = 1, 2, ..., R_i, the conditional expectations mentioned above are deduced as in (3). By analogy, (4) holds for j = 1, 2, ..., R_T. Substituting (3) and (4) into (2), the 'pseudo-log-likelihood' function becomes (5). M-step: The major purpose of this step is to maximize the 'pseudo-log-likelihood' function to obtain the next iterate. Taking derivatives of (5) with respect to µ and τ, respectively, and setting them to zero, the corresponding score equations are obtained as below. Given the l-th iteration estimate (µ^(l), τ^(l)) of (µ, τ), the next iterate (µ^(l+1), τ^(l+1)) is derived accordingly.
Next, update µ^(l) and τ^(l) to µ^(l+1) and τ^(l+1) by solving the equations shown above. The E-step and M-step alternate until the pre-fixed tolerance limit is satisfied. At that point, the converged values are taken as the EM-based estimates of µ and τ.
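The two steps can be illustrated on a simplified version of the problem: right-censored data from an untruncated normal N(µ, τ). The conditional moments E[Z | Z > c] and E[Z² | Z > c] play the role of the conditional expectations in (3) and (4); the paper's E-step additionally accounts for the left truncation at zero and the GPHCS removal pattern. A sketch:

```python
import numpy as np
from scipy.stats import norm

def em_censored_normal(x_obs, c_cens, max_iter=500, tol=1e-6):
    """EM for N(mu, tau) with exact observations x_obs and right-censoring
    times c_cens (simplified illustration of the E-step/M-step mechanics)."""
    n = len(x_obs) + len(c_cens)
    mu, tau = np.mean(x_obs), np.var(x_obs) + 1e-8
    for _ in range(max_iter):
        s = np.sqrt(tau)
        a = (c_cens - mu) / s
        lam = norm.pdf(a) / norm.sf(a)               # inverse Mills ratio
        ez = mu + s * lam                            # E[Z | Z > c]
        ez2 = mu**2 + tau + s * (c_cens + mu) * lam  # E[Z^2 | Z > c]
        mu_new = (np.sum(x_obs) + np.sum(ez)) / n    # M-step updates
        tau_new = (np.sum(x_obs**2) + np.sum(ez2)) / n - mu_new**2
        if max(abs(mu_new - mu), abs(tau_new - tau)) < tol:
            return mu_new, tau_new
        mu, tau = mu_new, tau_new
    return mu, tau

rng = np.random.default_rng(2)
z = rng.normal(2.0, 1.0, size=2000)
c = 2.5                                              # censoring threshold
x_obs, c_cens = z[z <= c], np.full(np.sum(z > c), c)
mu_hat, tau_hat = em_censored_normal(x_obs, c_cens)
```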

Confidence Interval Estimation
In this section, confidence interval estimates are provided for unknown parameters µ and τ using the MLE-based asymptotic confidence intervals (ACIs), the log-transformed MLE-based asymptotic confidence intervals (Log-CIs) and bootstrap confidence intervals (Boot-p CIs).

Asymptotic Confidence Intervals for MLEs
Based on the asymptotic normality of the MLEs, the asymptotic distribution of (µ̂_M, τ̂_M) is
\[
(\hat{\mu}_M, \hat{\tau}_M) \sim N\left( (\mu, \tau),\; I^{-1}(\mu, \tau) \right),
\]
where µ̂_M and τ̂_M are the MLEs of µ and τ, and I^{−1}(µ, τ) stands for the inverse of the Fisher information matrix. The variance-covariance matrix of (µ̂_M, τ̂_M) can then, in principle, be obtained from the expected Fisher information matrix.
In practice, it is often troublesome to work out the exact Fisher information matrix. Therefore, the inverse of the observed Fisher information matrix, I^{−1}_obs(µ̂_M, τ̂_M), is employed to estimate I^{−1}(µ, τ). Here we express the observed Fisher information matrix of the unknown parameters µ and τ in the following form.
The elements of the matrix (7) are calculated in Section 2. Subsequently, the asymptotic variance-covariance matrix is derived from (8). Based on the matrix (8), the approximate variances of µ̂_M and τ̂_M can be derived, and the 100(1 − ζ)% ACIs for the two parameters are
\[
\left( \hat{\mu}_M \mp Z_{1-\zeta/2} \sqrt{\widehat{\operatorname{Var}}(\hat{\mu}_M)} \right)
\quad \text{and} \quad
\left( \hat{\tau}_M \mp Z_{1-\zeta/2} \sqrt{\widehat{\operatorname{Var}}(\hat{\tau}_M)} \right),
\]
where Z_{1−ζ/2} satisfies P(X ≤ Z_{1−ζ/2}) = 1 − ζ/2 when X follows the standard normal distribution.
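The construction can be sketched as follows: the observed information is approximated by a finite-difference Hessian of the log-likelihood at the MLE, inverted, and the diagonal supplies the variances in the ACIs. This is illustrative only (shown on an i.i.d. normal toy likelihood); the paper plugs in the analytic second derivatives from Section 2.

```python
import numpy as np
from scipy.stats import norm

def asymptotic_ci(loglik, theta_hat, zeta=0.05, h=1e-4):
    """100(1 - zeta)% ACIs from the inverse observed information matrix."""
    p = len(theta_hat)
    E = np.eye(p) * h
    obs = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            # observed information = minus the Hessian of the log-likelihood
            obs[i, j] = -(loglik(theta_hat + E[i] + E[j]) - loglik(theta_hat + E[i] - E[j])
                          - loglik(theta_hat - E[i] + E[j]) + loglik(theta_hat - E[i] - E[j])) / (4 * h * h)
    cov = np.linalg.inv(obs)                # asymptotic variance-covariance matrix
    z = norm.ppf(1 - zeta / 2)
    se = np.sqrt(np.diag(cov))
    return [(t - z * s, t + z * s) for t, s in zip(theta_hat, se)]

# toy check on an i.i.d. normal log-likelihood with theta = (mu, tau = variance)
rng = np.random.default_rng(5)
x = rng.normal(0.5, 1.0, 1000)
ll = lambda th: np.sum(norm.logpdf(x, th[0], np.sqrt(th[1])))
cis = asymptotic_ci(ll, np.array([x.mean(), x.var()]))
```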

Asymptotic Confidence Intervals for Log-Transformed MLEs
Occasionally, the lower bound of an ACI is less than zero. To overcome this drawback, a logarithmic transformation combined with the delta method is suggested to ensure that the lower bound is nonnegative.
Let α = (µ, τ) denote the unknown parameter vector. In accordance with [17], the distribution of (ln α̂_i − ln α_i)/√Var(ln α̂_i) is approximately standard normal, where α_1 = µ and α_2 = τ. Hence, a 100(1 − ζ)% logarithmic-transformation confidence interval for α_i is constructed as follows.
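In explicit form, a standard delta-method reconstruction (using Var(ln α̂_i) ≈ Var(α̂_i)/α̂_i²) gives

```latex
\left[
\hat{\alpha}_i \exp\!\left\{-\frac{Z_{1-\zeta/2}\sqrt{\widehat{\operatorname{Var}}(\hat{\alpha}_i)}}{\hat{\alpha}_i}\right\},\;
\hat{\alpha}_i \exp\!\left\{\frac{Z_{1-\zeta/2}\sqrt{\widehat{\operatorname{Var}}(\hat{\alpha}_i)}}{\hat{\alpha}_i}\right\}
\right], \qquad i = 1, 2,
```

whose endpoints are positive by construction whenever α̂_i > 0.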

Percentile Bootstrap Approach
The asymptotic confidence interval methods introduced above both originate from large-sample theory and may not perform effectively for small sample sizes. We therefore suggest the percentile bootstrap approach to overcome this drawback and construct Boot-p CIs for µ and τ. According to [18], the following steps can be implemented to generate the bootstrap samples and develop the Boot-p CIs.
Step 1: Calculate the MLEs of two parameters µ and τ from the original generalized progressive hybrid censored sample.
Step 2: Utilize the same censoring scheme (T, n, m, k, R) together with µ̂_M and τ̂_M to generate a generalized progressive hybrid censored bootstrap sample.
Step 3: Calculate the bootstrap estimates of µ and τ, denoted µ* and τ*, from the bootstrap sample of the truncated normal distribution.
Step 4: Repeat Step 2 and Step 3 N times to obtain a sequence of bootstrap estimates. Step 5: Sort the bootstrap estimates in ascending order; the empirical ζ/2 and 1 − ζ/2 percentiles give the bounds of the 100(1 − ζ)% Boot-p CIs.
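The final percentile step can be sketched as follows (an illustrative helper: the ζ/2 and 1 − ζ/2 empirical quantiles of the N bootstrap estimates give the Boot-p bounds):

```python
import numpy as np

def boot_p_ci(boot_estimates, zeta=0.05):
    """100(1 - zeta)% percentile bootstrap CI from N bootstrap estimates."""
    est = np.asarray(boot_estimates, dtype=float)
    return (np.quantile(est, zeta / 2), np.quantile(est, 1 - zeta / 2))

lo, hi = boot_p_ci(np.arange(1, 1001))   # toy bootstrap sample 1, 2, ..., 1000
```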

Bayes Estimation
Bayes point and interval estimates are evaluated in this section for the unknown parameters µ and τ of the truncated normal distribution under the GPHCS. All the Bayes estimates can be deduced theoretically under symmetric and asymmetric loss functions using the Tierney and Kadane approximation and the importance sampling technique.

Prior and Posterior Distribution
Since a conjugate prior distribution for µ and τ does not exist, we make the same assumption as in [6] that µ and τ have a conditional bivariate prior distribution of the form π(µ, τ) = π_1(τ)π_2(µ|τ), where the prior distribution of τ is the inverse gamma distribution IG(c, d/2) and, given τ, µ follows the truncated normal distribution TN(a, τ/b) with truncation point zero. Here a, b, c, and d are treated as hyper-parameters, whose domains range from zero to positive infinity.
Therefore, the joint prior distribution is written as follows. Using (1) and (9), the posterior distribution of the left-truncated normal distribution TN(µ, τ) at zero is obtained, where P is the normalizing constant satisfying the corresponding integral condition. Let ℵ(µ, τ) denote a function of µ and τ; the posterior expectation of ℵ(µ, τ) given x̃ is then expressed as follows.

Loss Functions
In the Bayesian framework, the Bayes estimate of a function ℵ(µ, τ) can be derived based on a prescribed loss function. We discuss three kinds of loss functions, namely squared error, general entropy, and linex loss functions.

• Squared error loss function
The squared error loss (SEL) function is the most widely applicable loss function for obtaining Bayes estimators of unknown parameters. Its definition can be expressed as follows. In what follows, ℵ̂ denotes an estimator of ℵ = ℵ(µ, τ). Under this loss, the corresponding Bayes estimator ℵ̂_s of ℵ can be derived from the posterior expectation, and the Bayes estimate of ℵ(µ, τ) under the SEL function follows. • General entropy loss function The general entropy loss (GEL) function is expressed as follows. Under this loss, the Bayes estimator ℵ̂_e of ℵ can be derived accordingly, and the Bayes estimate of ℵ(µ, τ) based on the GEL function takes the corresponding form. • Linex loss function The linex loss (LL) function is defined as follows. Under this loss, the corresponding Bayes estimator ℵ̂_l of ℵ can be derived, and the Bayes estimate of ℵ(µ, τ) under the LL function is computed likewise. Obviously, the Bayes estimates of the unknown parameters µ and τ under the three kinds of loss functions cannot be expressed in closed form. For this reason, we derive the Bayes estimates by means of the Tierney and Kadane method, as well as the importance sampling procedure.
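With the garbled symbols written out in standard notation (ℵ = ℵ(µ, τ), q the GEL shape parameter, h the linex shape parameter), the three Bayes estimators are the usual posterior functionals:

```latex
\hat{\aleph}_{S} = E\left[\aleph \mid \tilde{x}\right], \qquad
\hat{\aleph}_{E} = \left( E\left[\aleph^{-q} \mid \tilde{x}\right] \right)^{-1/q}, \qquad
\hat{\aleph}_{L} = -\frac{1}{h} \ln E\left[ e^{-h\aleph} \mid \tilde{x}\right],
```

provided the corresponding posterior expectations exist.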

Tierney and Kadane Method
It is hard to reduce the Bayes estimates, which take the form of a ratio of two integrals, into closed forms. Ref. [19] introduced an alternative method to approximate such ratios of integrals and thereby derive the Bayes estimates of the unknown parameters. It is essentially a second-order Taylor approximation around the maximum a posteriori estimate (also known as the saddle-point approximation; see [20]). We regard ℵ(µ, τ) as a function of µ and τ. The Tierney and Kadane approximation is then summarized as follows.
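In the usual Tierney-Kadane notation, with λ(µ, τ) = (1/n) ln[π(µ, τ) L(µ, τ | x̃)] and λ*(µ, τ) = λ(µ, τ) + (1/n) ln ℵ(µ, τ), the posterior expectation is approximated by

```latex
E\left[\aleph(\mu,\tau) \mid \tilde{x}\right]
\approx \sqrt{\frac{\lvert \Sigma^{*} \rvert}{\lvert \Sigma \rvert}}\,
\exp\!\left\{ n \left[ \lambda^{*}(\hat{\mu}_{\lambda^{*}}, \hat{\tau}_{\lambda^{*}})
- \lambda(\hat{\mu}_{\lambda}, \hat{\tau}_{\lambda}) \right] \right\},
```

where (µ̂_λ, τ̂_λ) and (µ̂_{λ*}, τ̂_{λ*}) maximize λ and λ*, respectively, and Σ and Σ* are the inverses of the negative Hessians of λ and λ* at those maxima.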
Then we have Λ*_{µs}, which is given by the following expression. Using the equations above, the Bayes estimator of µ is obtained. Going through a similar process, the Bayes estimator of τ under the same loss function is achieved.
As for the GEL function, we consider ℵ(µ, τ) = µ^{−q} for µ, and the corresponding function λ*_{µe}(µ, τ) is expressed as follows. Subsequently, the following equations are solved to obtain (µ̂_{λ*}, τ̂_{λ*}). Then we have Λ*_{µe}, which is given by the following expression. After that, the Bayes estimator of µ is obtained. Likewise, the Bayes estimator of τ is derived under this loss function.
When it comes to the LL function, ℵ(µ, τ) = e^{−hµ} for µ is under consideration, and the corresponding function λ*_{µl}(µ, τ) is given as follows. Then µ̂_{λ*} and τ̂_{λ*} are derived by solving the following equations. We then have Λ*_{µl}, which is given by the following expression. The Bayes estimator of µ follows, and similarly, under the LL function, the Bayes estimator of τ can be obtained.

Importance Sampling Procedure
Since the T-K method fails to provide interval estimates, an importance sampling procedure is proposed in this part to construct Bayesian credible intervals. The importance sampling procedure is an effective approach for obtaining Bayes estimates for TN(µ, τ); meanwhile, the HPD intervals can be constructed through this method under generalized progressive hybrid censored data. Recall that the posterior distribution of µ and τ for µ > 0, τ > 0 has the following form. After some calculations, (18) reduces to the expression below. Through the importance sampling procedure, we obtain the Bayes estimates of µ and τ. The procedure is briefly described as follows: Step 1: Generate τ_1 from the proposal distribution of τ. Step 2: Sample µ_1 from the conditional distribution TN(µ|τ_1). Step 3: Repeat Step 1 and Step 2 k times to obtain (µ_1, τ_1), (µ_2, τ_2), ..., (µ_k, τ_k). Step 4: The Bayes estimate of ℵ(µ, τ) can then be derived as a ratio of weighted sample averages. The method of [21] is applied to derive the 100(1 − ζ)% Bayesian credible intervals for the given truncated normal distribution. Assume that 0 < ζ < 1 and that ℵ_ζ satisfies P(ℵ(µ, τ) ≤ ℵ_ζ) = ζ. For a prefixed ζ, we obtain an estimate of ℵ_ζ and use it to establish the HPD intervals for ℵ(µ, τ).
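The mechanics of Step 4 (a ratio of weighted averages) can be illustrated with a generic self-normalized importance sampler; the target and proposal below are placeholders for illustration, not the decomposition of (19):

```python
import numpy as np

rng = np.random.default_rng(7)

def snis_mean(h, log_target, sample_proposal, log_proposal, k=20000):
    """Self-normalized importance sampling estimate of E_target[h(theta)]."""
    theta = sample_proposal(k)
    logw = log_target(theta) - log_proposal(theta)   # unnormalized log-weights
    w = np.exp(logw - logw.max())                    # stabilize, then normalize
    w /= w.sum()
    return np.sum(w * h(theta))

# toy check: target N(0, 1) (unnormalized kernel), proposal N(1, 2^2);
# the true value of E[theta^2] under the target is 1
est = snis_mean(h=lambda t: t**2,
                log_target=lambda t: -0.5 * t**2,
                sample_proposal=lambda k: rng.normal(1.0, 2.0, k),
                log_proposal=lambda t: -0.5 * ((t - 1.0) / 2.0) ** 2)
```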

Simulation
In an attempt to analyze the performance of the different methods introduced in previous sections, we utilize the R software to conduct simulation experiments. In light of the algorithm proposed in [22], a progressive type-II censored sample from any continuous distribution can be generated. Adapting this method, we generate generalized progressive hybrid censored samples from the truncated normal distribution; see Algorithm 1.
Algorithm 1 Generate a generalized progressive hybrid censoring sample from truncated normal distribution.
1: Generate m independent random variables W_1, W_2, ..., W_m from the uniform distribution on (0, 1). 2: For the pre-fixed censoring scheme, set V_i = W_i^{1/(i + R_m + R_{m−1} + ... + R_{m−i+1})}, for i = 1, 2, ..., m. 3: Compute U_i = 1 − ∏_{j=m−i+1}^{m} V_j, for i = 1, 2, ..., m. Then U_1, U_2, ..., U_m is a progressive type-II censored sample of size m from the uniform distribution on (0, 1). 4: For known values of the parameters µ and τ, the desired progressive type-II censored sample from the truncated normal distribution TN(µ, τ) is X_i = F^{−1}(U_i), for i = 1, 2, ..., m, where F^{−1} is the inverse cumulative distribution function of the truncated normal distribution. 5: If T < X_k < X_m, the generalized progressive hybrid censored sample is (X_1, X_2, ..., X_k). 6: If X_k < T < X_m, obtain J satisfying X_J < T < X_{J+1}; the generalized progressive hybrid censored sample is (X_1, X_2, ..., X_J). 7: If X_k < X_m < T, the generalized progressive hybrid censored sample is (X_1, X_2, ..., X_m).
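The steps of Algorithm 1 can be sketched compactly as follows (an illustrative Python version with a hypothetical helper name; the paper's simulations are carried out in R):

```python
import numpy as np
from scipy.stats import truncnorm

def gphcs_sample(mu, tau, n, m, k, T, R, rng):
    """Generate a GPHCS sample from TN(mu, tau) left-truncated at zero.

    R must satisfy sum(R) + m == n; the progressive type-II part uses the
    uniform-transformation algorithm described in Algorithm 1."""
    assert sum(R) + m == n
    s = np.sqrt(tau)
    tn = truncnorm(a=-mu / s, b=np.inf, loc=mu, scale=s)
    w = rng.uniform(size=m)
    # V_i = W_i^{1/(i + R_m + ... + R_{m-i+1})}
    v = w ** (1.0 / (np.arange(1, m + 1) + np.cumsum(R[::-1])))
    u = 1.0 - np.cumprod(v[::-1])        # U_i = 1 - V_m V_{m-1} ... V_{m-i+1}
    x = tn.ppf(u)                        # ordered progressive type-II sample
    if T < x[k - 1]:
        return x[:k]                     # Case I
    if x[m - 1] <= T:
        return x[:m]                     # Case III
    return x[x <= T]                     # Case II

rng = np.random.default_rng(3)
R = np.array([10] + [0] * 9)             # Scheme II with n = 20, m = 10
sample = gphcs_sample(0.5, 1.0, 20, 10, 4, 1.5, R, rng)
```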
Without loss of generality, we take µ = 0.5 and τ = 1 with diverse values of T, n, m and k to generate generalized progressive hybrid censored samples from the left-truncated normal distribution with truncation point zero. Meanwhile, two kinds of censoring schemes are considered: Scheme I: R_m = n − m, R_i = 0 for i = 1, ..., m − 1. Scheme II: R_1 = n − m, R_i = 0 for i = 2, ..., m. For point estimation, we compute the MLEs and Bayes estimates. For the MLEs, the EM algorithm is employed to calculate the estimates of µ and τ, where the nleqslv package with the Broyden method in R is applied to solve the nonlinear equations in the maximization step. Besides, the MLEs are derived by the N-R approach for comparison; this method is implemented with the function 'optim' in R. The true parameter values are taken as the initial values for the N-R method and the EM algorithm. The tolerance limit ε is 0.0001 in all simulations. Tables A1 and A2 in Appendix A compare the results of the EM and N-R methods in terms of average absolute biases (ABs), the associated mean squared errors (MSEs), and average numbers of iterations (AIs) until convergence. They are computed as
\[
\mathrm{AB} = \frac{1}{N}\sum_{i=1}^{N} \left| \hat{\theta}_i - \theta \right|, \qquad
\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N} \left( \hat{\theta}_i - \theta \right)^{2},
\]
where N stands for the number of simulation replications, θ is the true value, and θ̂_i denotes the i-th estimate of θ.
Besides, the Bayes estimates under the diverse loss functions, including the SEL, LL, and GEL functions, are obtained by means of the T-K method and the importance sampling procedure. To approximate the true values better, the hyper-parameter values a = 0.05, b = 1, c = 8, d = 0.3 in the prior distributions are a reasonable choice based on numerous experimental simulations. The desired estimates are obtained with the linex parameter set to 0.35 and 0.45. The general entropy loss function is considered with q = 0.8 and q = 1.1. The ABs and MSEs are computed to evaluate the accuracy of the estimates. All the estimates are derived by replicating each case 1000 times. The simulation results are shown in Tables A3 and A4 in Appendix A.
From these tables, some conclusions can be drawn: (1) There is no significant difference between the EM algorithm and the N-R algorithm in terms of ABs and MSEs. (2) The N-R method takes fewer iterations to converge than the EM algorithm. For interval estimation, we derive 95% confidence intervals (CIs) of the parameters using the MLEs, the log-transformed MLEs, and the Boot-p approach, as well as 95% HPD intervals, for Scheme I and Scheme II. We compare the coverage probabilities (CPs) and average lengths (ALs) of these interval estimates. The simulation results are reported in Tables A5 and A6 in Appendix A.
From these tables, some conclusions are summarized as follows.
(1) For confidence intervals, the Log-CIs perform much better than the ACIs in the sense of having higher coverage probabilities. (2) When the sample size n gets larger, the CPs of all interval estimates tend to decrease.
(3) Boot-p CIs show higher coverage probabilities and narrower interval lengths than the ACIs and Log-CIs when the sample size is small. (4) With n, m, and k fixed, the CPs and ALs of all estimates fluctuate only slightly as T increases, without a significant tendency. (5) The HPD intervals are slightly better than the other interval estimates in terms of CPs. (6) Scheme II usually performs better than Scheme I with regard to CPs.

Real Data Analysis
A real data example is considered in this section to demonstrate the performance of the proposed estimation approaches. The data set, taken from [23] (Lawless 1982, page 288), records the number of millions of revolutions before failure for 23 ball bearings. The data are given in Table 1 and shown as a histogram in Figure 3. Prior to analyzing the example, one question that arises is whether the data set comes from a truncated normal distribution. To validate this hypothesis, we fit the truncated normal distribution to the data set, in competition with the folded normal (FN) and half-normal (HN) distributions. The probability density functions of FN and HN for x > 0 are respectively written as follows.
To assess the goodness of fit of the given models, we use − log(L) and the Kolmogorov-Smirnov (K-S) statistic, defined by D_n = sup_x |F_n(x) − F(x)|, where L is the maximized likelihood, n is the number of observations, F_n(x) is the empirical cumulative distribution function of the sample, and F(x) is the hypothesized cumulative distribution function. The estimated values are shown in Table 2. Under the complete data, the classical estimates for the given distributions are additionally obtained. In view of the fact that the truncated normal distribution has the lowest values of the K-S and − log(L) statistics, there is no evidence leading to rejection of the null hypothesis that the data come from the truncated normal distribution.
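As a sketch of this goodness-of-fit computation (using the ball-bearing data as commonly reprinted in the literature; verify against Table 1), the TN parameters can be fitted by maximizing the log-likelihood and the K-S statistic computed with `scipy.stats.kstest`:

```python
import numpy as np
from scipy.stats import norm, truncnorm, kstest
from scipy.optimize import minimize

# Ball-bearing failure data (millions of revolutions), Lawless (1982), as
# commonly reprinted -- check against Table 1 of the paper.
data = np.array([17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.48, 51.84,
                 51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12,
                 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40])

def neg_loglik(theta):
    """Negative log-likelihood of TN(mu, tau) left-truncated at zero."""
    mu, tau = theta
    if tau <= 0 or mu <= 0:
        return np.inf
    s = np.sqrt(tau)
    return -np.sum(norm.logpdf((data - mu) / s) - np.log(s) - norm.logcdf(mu / s))

res = minimize(neg_loglik, x0=(data.mean(), data.var()), method="Nelder-Mead")
mu_hat, tau_hat = res.x
s_hat = np.sqrt(tau_hat)
tn = truncnorm(a=-mu_hat / s_hat, b=np.inf, loc=mu_hat, scale=s_hat)
D, p_value = kstest(data, tn.cdf)       # K-S distance to the fitted TN cdf
```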
Then the following generalized progressive hybrid censored samples can be generated by setting m = 18 and R_1 = 5, R_2 = ... = R_18 = 0. In this example, we take T = 1, k = 16 for Case I; T = 1, k = 12 for Case II; and T = 1.8, k = 12 for Case III. Table 3 presents the point estimates of µ and τ under the generalized progressive hybrid censored sample. The N-R algorithm and the EM algorithm are employed to derive the MLEs of the parameters of the truncated normal distribution; the estimates under the complete data are taken as initial values for these two methods. For the Bayes estimates, non-informative prior distributions (a = b = c = d = 0) are applied to compute the values under symmetric and asymmetric loss functions based on the T-K approximation and the importance sampling procedure. We choose q = 0.8 and q = 1.1 for the general entropy loss function, and set the linex parameter to 0.35 and 0.45. Table 4 shows the 95% CIs and HPD intervals of the unknown parameters µ and τ. As can be seen from the tables, the MLEs and the Bayes estimates under the SEL function and the importance sampling procedure are quite close to each other.

Conclusive Remarks
In this paper, we address the problem of statistical inference for the truncated normal distribution with truncation point zero under generalized progressive hybrid censored data. The N-R and EM algorithms are applied to calculate the MLEs of the unknown parameters, and the associated estimates are computed through numerous simulations by taking the true parameter values as initial guesses. Further, making use of the asymptotic normality of the maximum likelihood estimators, we construct 95% ACIs. For small sample sizes, the Boot-p method is suggested to develop the Boot-p intervals. Considering different kinds of loss functions, we derive Bayes estimates through the T-K approximation and the importance sampling procedure; the latter is also employed to construct the Bayesian credible intervals. Extensive simulations are conducted to examine how the proposed approaches work. It is notable that, if proper prior information on the unknown parameters is available, the corresponding Bayes estimates are superior to the respective MLEs. Among the interval estimates, the Bayesian credible intervals are slightly better than the others. A real data example is studied for illustrative purposes; the considered model is found to be suitable for this case, and the proposed approaches perform well.
While we consider the statistical inference of the truncated normal distribution at zero in this paper, the proposed approaches can be extended to the doubly truncated normal distribution and the singly truncated normal distribution at any fixed truncation point. The statistical inference on the doubly truncated distribution with unknown truncation points in [24] provides a good starting point for discussion and further research. Extensive work needs to be carried out in this direction.

Conflicts of Interest:
The authors declare no conflict of interest.