Statistical analysis of progressively first-failure-censored data via beta-binomial removals

Abstract: Progressive first-failure censoring has been widely used in practice when the experimenter wishes to remove some groups of test units before the first failure is observed in all groups. In practice, some test groups may haphazardly quit the experiment at each progressive stage, which cannot be determined in advance. Accordingly, in this article, we propose progressively first-failure censored sampling with random removals, which allows the removal of the surviving group(s) during the execution of the life test with uncertain probability, governed by the beta-binomial probability law. The generalized extreme value lifetime model has been widely used to analyze a variety of extreme value data, including flood flows, wind speeds, radioactive emissions, and others. Therefore, when the sample observations are gathered under the suggested censoring plan, the Bayes and maximum likelihood approaches are used to estimate the generalized extreme value distribution parameters. Furthermore, Bayes estimates are produced under balanced symmetric and asymmetric loss functions. A hybrid Gibbs-within-Metropolis-Hastings method is suggested to draw samples from the joint posterior distribution. The highest posterior density intervals are also provided. To understand how the suggested inferential approaches actually work in the long run, extensive Monte Carlo simulation experiments are carried out. Two applications to real-world datasets from clinical trials are examined to show the applicability and feasibility of the suggested methodology. The numerical results show that the proposed sampling mechanism offers a flexible framework for operating a classical (or Bayesian) inferential approach to estimate any lifetime parameter.


Introduction
In reliability studies, censoring frequently occurs, allowing the experiment to be stopped before all of the units have failed. These approaches produce observations known as censored samples. First-failure censoring refers to a life test in which the experimenter divides the units into several sets, each serving as an assembly of test units, and then runs the test for all groups concurrently until the first failure in each set is observed. One thus tests n = k × s sample units, where s is the number of groups and each group contains k items. This scheme is useful when the survival time is very large and test facilities are limited but test material is substantially less expensive; see Balasooriya [1].
The main drawback of first-failure censoring is that it prevents units from being removed at any time other than the final termination time. To address this drawback, Wu and Kuş [2] suggested progressively first-failure censored sampling (PFFC), a life-testing strategy that combines the first-failure and progressive Type-II censoring (PCT2) plans. The PFFC thus allows us to exclude some sets of units from the life test before observing the first failures in all sets. They also investigated inferences for the Weibull parameters and demonstrated that this censoring provides shorter test durations than the PCT2. Based on PFFC data, several works have appeared in the literature; for example, see Ashour et al. [3], Yousef and Almetwally [4], Nassar et al. [5], Ramadan et al. [6], and the references cited therein.
Within the past decade, the PFFC strategy has attracted considerable interest and has become highly common in reliability research. Nevertheless, in certain real-life scenarios, such as clinical trials, the number of patients dropping out of the trial at each step is random, and the specific pattern of removals cannot be predetermined. As a result, the number of patients who drop out of the experiment at each stage will follow a discrete distribution. Mostly, researchers have used discrete uniform or binomial probability distributions. Huang and Wu [7] studied the estimation problem for progressively first-failure censored data with a discrete uniform distribution for the number of units removed at each step. The discrete uniform removal design may not be appropriate, however, since it presupposes that each removal event occurs with the same probability regardless of the number of units removed; consequently, Ashour et al. [8] proposed progressively first-failure censoring with binomial removals.
In contrast, it appears implausible for a binomial distribution to assume that the probability of removal for each patient is constant throughout each stage. In practice, the likelihood of removal will vary from patient to patient and remains unknown to the experimenter. The removal probability p should therefore be regarded as random and as following a probability distribution. To account for this uncertainty, Singh et al. [9] assumed that the distribution of the number of removals follows a binomial distribution and that the probability of removal (p) is a random variable with a beta distribution. They called this new censoring mechanism progressive Type-II censoring with beta-binomial removals. To extend the PFFC beyond pre-fixed removals, Kaushik et al. [10] and Sangal and Sinha [11] suggested progressive Type-I interval and progressive Type-I hybrid censoring with beta-binomial removals, respectively. Over the past decade, several works have been developed on the basis of the progressive censoring framework with random removals; see, for example, Ding et al. [12], Ding and Tse [13], Kaushik et al. [14], Chacko and Mohan [15], and Elshahhat and Nassar [16], among others.
As far as we are aware, there hasn't been any research that focuses on the analysis of PFFC when the number of objects removed at each stage follows a beta-binomial distribution. The major goal of this work is to extend the PFFC plan from pre-fixed removals to beta-binomial removals. To define methodology, we propose a new censoring scheme called progressive first-failure censored sampling with beta-binomial removal (PFFC-BBR).
Extreme value theory has today become one of the most important statistical topics in various applied sciences. The generalized extreme value (GEV) distribution is often used to model the smallest or largest value from a group or block of independent, identically distributed random values representing measurements or observations. It is also useful in situations where data indicate exponentially increasing failure rates. As a result, it has been used to analyze a variety of extreme value data, including flood flows, wind speeds, and radioactive emissions; for further information, see Lai [17]. The distribution may be obtained as a special case of the beta log-Weibull distribution of Cordeiro et al. [18]. Suppose that a lifetime random variable X follows the three-parameter GEV(α, λ, µ). To illustrate our theory, we consider a PFFC-BBR sample following a GEV distribution. Hence, the respective probability density function (PDF) and cumulative distribution function (CDF) of X are given by

f(x) = αλ exp(λ(x − µ)) exp(−α exp(λ(x − µ))), x ∈ ℝ, (1.1)

and

F(x) = 1 − exp(−α exp(λ(x − µ))), x ∈ ℝ, (1.2)

where α and λ are the shape and scale parameters, respectively, and µ is a location parameter.
Putting α = 1 in (1.1), Type-I extreme value distribution discussed in Balakrishnan et al. [19] is obtained. Note that Z = exp (λ (X − µ)) follows the exponential distribution with scale parameter α. From (1.1) and (1.2), the hazard rate function (HRF), h(t), at a distinct mission time t, is given by h(t) = αλ exp (λ (t − µ)). In the reliability context, Pandey et al. [20] discussed the maximum likelihood estimators (MLEs) and Bayes estimators (BEs) of the GEV distribution in the presence of PCT2 data; and Kumari and Pandey [21] discussed the Bayes estimation procedures for estimating the GEV parameters based on Type-II censoring. Without loss of generality, we take µ = 0 and develop inferential procedures for the shape α and scale λ parameters.
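To make the model concrete, the PDF (1.1), the CDF (1.2) and the HRF above can be evaluated numerically. The following Python sketch (function names are illustrative; the paper's own computations use R) checks that h(t) = f(t)/(1 − F(t)):

```python
import math

def gev_pdf(x, alpha, lam, mu=0.0):
    # f(x) = alpha*lam*exp(lam*(x - mu))*exp(-alpha*exp(lam*(x - mu))), Eq (1.1)
    z = math.exp(lam * (x - mu))
    return alpha * lam * z * math.exp(-alpha * z)

def gev_cdf(x, alpha, lam, mu=0.0):
    # F(x) = 1 - exp(-alpha*exp(lam*(x - mu))), Eq (1.2)
    return 1.0 - math.exp(-alpha * math.exp(lam * (x - mu)))

def gev_hrf(t, alpha, lam, mu=0.0):
    # h(t) = f(t)/(1 - F(t)) = alpha*lam*exp(lam*(t - mu))
    return alpha * lam * math.exp(lam * (t - mu))
```

Since h(t) is increasing in t (for λ > 0), the model suits the exponentially increasing failure rates mentioned above.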
Briefly, the main objectives of the present work are as follows:
• When the lifetime points are gathered using PFFC-BBR, infer both point and interval estimates of the unknown parameters of the GEV distribution using the maximum likelihood and Bayesian inferential procedures.
• The BEs are developed under various balanced symmetric and asymmetric loss functions, including the balanced squared-error loss (BSEL), balanced linear-exponential loss (BLL), and balanced general-entropy loss (BGEL), which serve as an interesting decision-making tool. This presumes that the parameters α and λ have independent gamma and Hartigan priors, respectively.
• Two different confidence interval-estimation procedures are also constructed, namely approximate confidence intervals (ACIs) and highest posterior density (HPD) intervals.
• Since the likelihood function takes a complex form, the BEs, along with their HPD interval estimates, are developed via Markov chain Monte Carlo (MCMC) techniques, namely the Metropolis-Hastings (M-H) algorithm and the Gibbs sampler.
• Numerically, the acquired MLEs are evaluated via the 'maxLik' package by Henningsen and Toomet [22], which implements the Newton-Raphson method. Further, the acquired BEs are evaluated via the 'CODA' package by Plummer et al. [23], which summarizes the MCMC variates.
• Since the performances of the different estimates cannot be compared analytically, we perform Monte Carlo simulations to examine and compare them in terms of their simulated mean squared errors, relative absolute biases, and average confidence lengths.
• The proposed methodology is illustrated by analyzing two real-life data sets from clinical trials, representing mortality rates from coronavirus disease 2019 (COVID-19) and survival times of ovarian cancer patients after surgical treatment.
• Lastly, several extensions of the proposed censoring plan are demonstrated and may be obtained as special cases.
The remaining sections are arranged as follows: We present a formulation of PFFC-BBR in Section 2. In Sections 3 and 4, respectively, the classical and Bayesian approaches to model parameter estimation are developed. Section 5 presents the outcomes of the simulations. Section 6 examines two practical applications. Lastly, in Section 7, we draw a conclusion to the research.

PFFC-BBR model
Progressive censoring mechanisms with random removals arise naturally in certain real-world situations. Consider a clinical test in which a doctor examines various cancer patients, but after the first patient dies, some of the patients may leave the hospital out of fear and/or a lack of confidence in the doctor. Following the second death, a few more leave, and so on. Ultimately, the doctor stops taking observations once a certain number of deaths (say, m) have been documented. As a result, the number of patients who leave the hospital at each stage is random, and the exact pattern of removals cannot be determined. Thus, the number of removals should be treated as random, following a binomial distribution whose removal probability is itself uncertain and follows a probability distribution rather than being fixed. Therefore, Singh et al. [9], making use of the beta-binomial distribution for random removals, introduced a progressive Type-II censoring scheme with beta-binomial removals (PCT2-BBR). They also investigated different inferences for generalized Lindley distributed lifetimes. Several inferences for different lifetime models based on the PCT2-BBR have been carried out, e.g., by Usta et al. [24], Kaushik et al. [10] and Vishwakarma et al. [25], among others.
Suppose that s independent groups, each containing k items, are placed on a life-testing experiment at time zero. Let m be a pre-fixed number of failures, and let R = (R_1, R_2, . . . , R_m) denote the random removals of the groups. At the time of the first observed failure (X_(1)), R_1 groups and the group in which the first failure occurred are removed from the experiment. Following the second observed failure (X_(2)), R_2 groups and the group in which the second failure is observed are removed from the remaining s − R_1 − 1 live groups, and so on. This procedure continues until the mth failure has occurred, at which point all remaining live groups are removed. Then X_(1), X_(2), . . . , X_(m) represent the independent lifetimes of the PFFC order statistics with a pre-determined number of removals, say (R_1 = r_1, R_2 = r_2, . . . , R_m = r_m). If the failure times of the elements originally placed on the life test come from a continuous population with PDF f(·) and CDF F(·), then the likelihood function of the observed data x = (x_(1), x_(2), . . . , x_(m)) can be expressed as

L_1(θ | r, x) = A k^m ∏_{i=1}^{m} f(x_(i)) [1 − F(x_(i))]^{k(r_i + 1) − 1}, (2.1)

where A = s(s − r_1 − 1)(s − r_1 − r_2 − 2) · · · (s − Σ_{j=1}^{m−1} (r_j + 1)).

Suppose the probability of removing r_i group(s) at the ith failure, i = 1, 2, . . . , m − 1, follows a binomial distribution with parameters s − m − Σ_{j=1}^{i−1} r_j and p, that is,

P(R_i = r_i | p) = C(s − m − Σ_{j=1}^{i−1} r_j, r_i) p^{r_i} (1 − p)^{s − m − Σ_{j=1}^{i} r_j}, (2.2)

where C(n, r) denotes the binomial coefficient, 0 ≤ r_1 ≤ s − m, and 0 ≤ r_i ≤ s − m − Σ_{j=1}^{i−1} r_j for i = 2, 3, . . . , m − 1. According to Singh et al. [9], we assume that the removal probability p is not fixed during the whole experiment but is instead a random variable following the beta distribution with parameters ξ and ζ, having PDF

g(p) = p^{ξ−1} (1 − p)^{ζ−1} / B(ξ, ζ), 0 < p < 1, ξ, ζ > 0, (2.3)

where B(ξ, ζ) = Γ(ξ)Γ(ζ)/Γ(ξ + ζ) is the beta function. Thus, from (2.2) and (2.3), the unconditional distribution of the R_i can be derived by integrating out p. After simplification, we get

P(R_i = r_i) = C(s − m − Σ_{j=1}^{i−1} r_j, r_i) B(ξ + r_i, ζ + s − m − Σ_{j=1}^{i} r_j) / B(ξ, ζ). (2.4)

The probability mass function given in (2.4) is known as the beta-binomial distribution and is denoted by BB(n′, ξ, ζ), where n′ denotes the number of trials.
Thus, the joint probability distribution of the beta-binomial removals is given by

P(R = r) = P(R_1 = r_1) P(R_2 = r_2 | R_1 = r_1) · · · P(R_{m−1} = r_{m−1} | R_{m−2} = r_{m−2}, . . . , R_1 = r_1). (2.5)

Substituting (2.4) into (2.5) yields the joint probability of R_1 = r_1, R_2 = r_2, . . . , R_m = r_m, denoted by L_2(r | ξ, ζ). (2.6)

Furthermore, we assume that R_i = r_i is independent of X_(i) for all i = 1, 2, . . . , m. Hence, the full likelihood function of the PFFC-BBR takes the form

L(θ, ξ, ζ | x, r) = L_1(θ | r, x) L_2(r | ξ, ζ), (2.7)

where L_1(θ | r, x) and L_2(r | ξ, ζ) are defined in (2.1) and (2.6), respectively. It should also be noted that L_1(·) is a function only of the unknown parameter θ of the parent distribution, whereas L_2(·) is a function only of the beta-binomial parameters ξ and ζ. Therefore, L_1(·) and L_2(·) can be maximized independently to obtain the MLEs θ̂, ξ̂ and ζ̂ of θ, ξ and ζ, respectively. The PCT2-BBR proposed by Singh et al. [9] can be obtained as a special case of (2.7) by setting k = 1. The sampling procedure for a life test based on the PFFC-BBR is reported in Table 1.
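As a numerical illustration of the BB(n′, ξ, ζ) law in (2.4), the following Python sketch evaluates the marginal removal probability through log-gamma functions (the helper names are our own):

```python
import math

def log_beta(a, b):
    # log B(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a + b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binom_pmf(r, n, xi, zeta):
    # P(R = r) = C(n, r) * B(xi + r, zeta + n - r) / B(xi, zeta), as in Eq (2.4)
    return math.comb(n, r) * math.exp(log_beta(xi + r, zeta + n - r) - log_beta(xi, zeta))
```

With ξ = ζ = 1 the law reduces to a discrete uniform distribution on {0, 1, . . . , n}, which recovers the removal scheme of Huang and Wu [7] as a special case.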

Likelihood inference
This section discusses the procedures for obtaining the MLEs and ACIs of α, λ, ξ, and ζ from the proposed plan.

Maximum likelihood estimators
Consider placing s × k independent units from a population on a PFFC-BBR life test, with the lifetimes having an identical distribution with the PDF and CDF specified in (1.1) and (1.2), respectively. Substituting (1.1) and (1.2) into (2.1) (with µ = 0), the likelihood function can be expressed, up to proportionality, as

L_1(α, λ | r, x) ∝ α^m λ^m exp(λ Σ_{i=1}^{m} x_(i)) exp(−α Σ_{i=1}^{m} k(r_i + 1) exp(λ x_(i))). (3.1)

The corresponding log-likelihood function ℓ_1(·) ∝ log L_1(·) of (3.1) becomes

ℓ_1(α, λ) ∝ m log α + m log λ + λ Σ_{i=1}^{m} x_(i) − α Σ_{i=1}^{m} k(r_i + 1) exp(λ x_(i)). (3.2)

The MLEs α̂ and λ̂ of the parameters α and λ, respectively, are obtained by solving the log-likelihood equations

∂ℓ_1/∂α = m/α − Σ_{i=1}^{m} k(r_i + 1) exp(λ x_(i)) = 0 (3.5)

and

∂ℓ_1/∂λ = m/λ + Σ_{i=1}^{m} x_(i) − α Σ_{i=1}^{m} k(r_i + 1) x_(i) exp(λ x_(i)) = 0. (3.6)

Similarly, the MLEs ξ̂ and ζ̂ of ξ and ζ, respectively, can be found by maximizing (2.6) directly. Hence, the natural logarithm ℓ_2(·) ∝ log L_2(·) can be written, up to additive constants, as

ℓ_2(ξ, ζ) ∝ Σ_{i=1}^{m−1} [log B(ξ + r_i, ζ_i*) − log B(ξ, ζ)], (3.7)

where ζ_i* = ζ + s − m − Σ_{j=1}^{i} r_j. From (3.7), the MLEs ξ̂ and ζ̂ can be obtained as the simultaneous solutions of the two normal non-linear equations

∂ℓ_2/∂ξ = Σ_{i=1}^{m−1} [ψ(ξ + r_i) − ψ(ξ + r_i + ζ_i*) − ψ(ξ) + ψ(ξ + ζ)] = 0 (3.8)

and

∂ℓ_2/∂ζ = Σ_{i=1}^{m−1} [ψ(ζ_i*) − ψ(ξ + r_i + ζ_i*) − ψ(ζ) + ψ(ξ + ζ)] = 0, (3.9)

where Γ(·) and ψ(·) = Γ′(·)/Γ(·) are the gamma and digamma functions, respectively; see Lawless [26]. The likelihood equations (3.5), (3.6), (3.8) and (3.9) in the unknown parameters α, λ, ξ and ζ do not yield closed-form solutions. The MLEs can therefore be evaluated numerically using any iterative approach, such as the Newton-Raphson method.

Bayesian inference
The Bayes approach to deriving point and interval estimates of α, λ, ξ, and ζ under BSEL, BLL, and BGEL functions will be discussed in this section.

Balanced loss functions
A loss function is essential in statistical decision making since it quantifies estimation precision. Zellner [27] proposed a generalized loss function known as the balanced loss function. The balanced loss (BL) function achieves a compromise between classical and Bayesian techniques and produces an estimate that is a linear mixture of likelihood and Bayesian estimates. A more general class of balanced-type loss functions, proposed by Jozani et al. [28], for estimating an unknown parameter θ on the basis of a random vector X = (X_1, X_2, . . . , X_n) by an estimator θ̃, is defined as

l_{ω,θ*}(θ, θ̃) = ω l(θ*, θ̃) + (1 − ω) l(θ, θ̃), 0 ≤ ω ≤ 1. (4.1)

The expression in (4.1) involves a loss function, denoted by l(·), which is used in estimating the parameter θ by the estimator θ̃. Additionally, the estimator θ* is selected beforehand as a 'target' estimator of θ. The topic has been considered by numerous authors in the recent past; see Barot and Patel [29], Maiti and Kayal [30], Ahmadi and Doostparast [31], and the citations given therein.
The BSEL l_BS(·), BLL l_BL(·) and BGEL l_BG(·) functions are defined, respectively, as

l_BS(θ, θ̃) = ω (θ̃ − θ̂)² + (1 − ω) (θ̃ − θ)², (4.2)

l_BL(θ, θ̃) = ω [e^{h(θ̃ − θ̂)} − h(θ̃ − θ̂) − 1] + (1 − ω) [e^{h(θ̃ − θ)} − h(θ̃ − θ) − 1], h ≠ 0, (4.3)

and

l_BG(θ, θ̃) = ω [(θ̃/θ̂)^q − q log(θ̃/θ̂) − 1] + (1 − ω) [(θ̃/θ)^q − q log(θ̃/θ) − 1], q ≠ 0. (4.4)

Using (4.2)-(4.4), the BEs of θ under the BSEL, BLL and BGEL functions are, respectively, given by

θ̃_BS = ω θ̂ + (1 − ω) E(θ | x), (4.5)

θ̃_BL = −(1/h) log{ω e^{−h θ̂} + (1 − ω) E(e^{−hθ} | x)}, (4.6)

and

θ̃_BG = {ω θ̂^{−q} + (1 − ω) E(θ^{−q} | x)}^{−1/q}, (4.7)

where θ̂ is the MLE of θ. In particular, setting ω = 0 (or ω = 1) reduces the BE under a BL function to the conventional BE under the corresponding unbalanced loss (or to the MLE).
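Given posterior draws of a parameter (e.g., from the MCMC sampler of Subsection 4.2) and its MLE, the three balanced estimates (4.5)-(4.7) are simple functionals of the posterior sample. A Python sketch (the function name and defaults are illustrative):

```python
import math

def balanced_estimates(mle, draws, omega, h=0.5, q=0.5):
    """Bayes estimates under BSEL, BLL and BGEL from posterior `draws`."""
    n = len(draws)
    post_mean = sum(draws) / n                        # E(theta | x)
    e_exp = sum(math.exp(-h * t) for t in draws) / n  # E(exp(-h*theta) | x)
    e_pow = sum(t ** (-q) for t in draws) / n         # E(theta^(-q) | x)
    bsel = omega * mle + (1.0 - omega) * post_mean
    bll = -math.log(omega * math.exp(-h * mle) + (1.0 - omega) * e_exp) / h
    bgel = (omega * mle ** (-q) + (1.0 - omega) * e_pow) ** (-1.0 / q)
    return bsel, bll, bgel
```

As noted above, ω = 1 returns the MLE for all three losses, while ω = 0 gives the usual unbalanced Bayes estimates.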

Posterior functions
In accordance with Kumari and Pandey [21], we assume that α and λ have a conjugate gamma prior and Hartigan's non-informative prior, respectively. Thus, the respective prior distributions of α and λ are given by

π_1(α) ∝ α^{a−1} e^{−bα}, α > 0, (4.8)

and

π_2(λ) ∝ λ^{−c}, λ > 0, (4.9)

where the hyper-parameters a, b and c are assumed to be non-negative and known. Here, the gamma prior (4.8) is chosen to reflect prior knowledge about α. Putting c = 1, Eq (4.9) reduces to Jeffreys' prior. Also, Hartigan's [32] asymptotically invariant prior, a popular non-informative prior among data analysts, is obtained by putting c = 3 in (4.9). Consequently, from (4.8) and (4.9), the joint prior PDF of α and λ is given by

π(α, λ) ∝ α^{a−1} λ^{−c} e^{−bα}. (4.10)

Combining (3.1) with (4.10) via the continuous Bayes' theorem, the joint posterior PDF of α and λ is

π_1*(α, λ | x) = C_1^{−1} α^{m+a−1} λ^{m−c} exp(λ Σ_{i=1}^{m} x_(i) − α [b + Σ_{i=1}^{m} k(r_i + 1) e^{λ x_(i)}]), (4.11)

where C_1 is the normalizing constant. Since we do not have prior information about ξ and ζ, it is better to consider a non-informative prior for the Bayesian analysis. Thus, the joint independent non-informative prior of ξ and ζ is given by π(ξ, ζ) = (ξζ)^{−1} for ξ, ζ > 0. Hence, the joint posterior PDF of ξ and ζ becomes

π_2*(ξ, ζ | r) = C_2^{−1} (ξζ)^{−1} L_2(r | ξ, ζ), (4.12)

where the normalizing constant C_2 of (4.12) is obtained by integrating the numerator of (4.12) over ξ, ζ > 0. Since the likelihood functions (3.1) and (2.6) take non-linear forms, the Gibbs sampler and M-H algorithm can be effectively used to approximate the Bayes point and interval estimates. First, the full conditional distributions of α, λ, ξ and ζ must be obtained; they are, respectively,

π(α | λ, x) ∝ α^{m+a−1} e^{−α b*(λ)}, where b*(λ) = b + Σ_{i=1}^{m} k(r_i + 1) e^{λ x_(i)}, (4.13)

π(λ | α, x) ∝ λ^{m−c} exp(λ Σ_{i=1}^{m} x_(i) − α Σ_{i=1}^{m} k(r_i + 1) e^{λ x_(i)}), (4.14)

and the conditionals of ξ and ζ, (4.15) and (4.16), which are proportional to (4.12) with the other parameter held fixed. It is evident from Eq (4.13) that samples of α can be generated with ease from a gamma density with shape parameter (m + a) and scale parameter b*(λ). However, the conditional distributions presented in Eqs (4.14)-(4.16) cannot be reduced to any standard distribution.
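The hybrid scheme can be sketched as follows: α is drawn exactly from its gamma full conditional (treating b*(λ) as a rate), while λ is updated by a random-walk M-H step on its full conditional. The code below is a minimal Python illustration with our own choices of proposal scale and starting value, not the paper's R implementation:

```python
import math, random

def gibbs_within_mh(x, r, k, a, b, c, n_iter=3000, burn=500, step=0.1, seed=1):
    """Draw (alpha, lambda) from the joint posterior for PFFC data with mu = 0."""
    rng = random.Random(seed)
    m, sum_x = len(x), sum(x)
    def b_star(lam):
        # b*(lam) = b + sum k*(r_i + 1)*exp(lam * x_i)
        return b + sum(k * (ri + 1) * math.exp(lam * xi) for xi, ri in zip(x, r))
    def log_cond_lam(lam, alpha):
        # log full conditional of lambda, up to an additive constant
        if lam <= 0.0:
            return -math.inf
        return (m - c) * math.log(lam) + lam * sum_x - alpha * (b_star(lam) - b)
    lam, draws = 1.0, []
    for it in range(n_iter):
        alpha = rng.gammavariate(m + a, 1.0 / b_star(lam))  # Gibbs step (rate -> scale)
        cand = lam + rng.gauss(0.0, step)                   # random-walk proposal
        log_acc = log_cond_lam(cand, alpha) - log_cond_lam(lam, alpha)
        if log_acc >= 0.0 or rng.random() < math.exp(log_acc):
            lam = cand                                      # M-H accept
        if it >= burn:
            draws.append((alpha, lam))
    return draws
```

HPD intervals and the balanced estimates of Subsection 4.1 can then be computed directly from the retained draws.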

Monte Carlo simulations
To evaluate the efficacy of the proposed estimators obtained in the preceding sections, we need to simulate progressively first-failure censored samples with beta-binomial removals from the GEV distribution. In this section, we first present a method for simulating random samples under the PFFC-BBR and then analyze how well the various estimators perform on those simulated samples.

Simulation design
To obtain a PFFC-BBR sample, we propose the following algorithm:
Step 1: Provide numerical values of the parameters k, s, m, α, λ, ξ and ζ.
Step 2: Generate the removal probability p from the beta distribution (2.3) with parameters ξ and ζ.
Step 3: Given p, generate the random removals r_i from the binomial distribution (2.2), i.e., R_i ~ binomial(s − m − Σ_{j=1}^{i−1} r_j, p) for i = 1, 2, . . . , m − 1, and set r_m = s − m − Σ_{j=1}^{m−1} r_j.
Step 4: Generate m independent uniform U(0, 1) observations W_1, W_2, . . . , W_m.
Step 5: Given R = r, generate a PCT2-BBR sample as U_i = 1 − V_m V_{m−1} · · · V_{m−i+1}, where V_i = W_i^{1/(i + r_m + r_{m−1} + ··· + r_{m−i+1})}, i = 1, 2, . . . , m.
Step 6: Set x_(i) = F^{−1}(1 − (1 − U_i)^{1/k}), i.e., x_(i) = λ^{−1} log(−log(1 − U_i)/(αk)) for the GEV model with µ = 0. Hence, x_(i), i = 1, 2, . . . , m, is the required PFFC-BBR sample of size m from the GEV distribution.

Now, using (α, λ, ξ, ζ) = (0.1, 1, 2, 2), we generate 1,000 PFFC-BBR samples from the GEV distribution for different combinations of k, s and m, namely s = 20 (small), 50 (moderate) and 80 (large) for each group size k (= 1, 3). The test is terminated when the number of failed subjects reaches a specified value m, where the failure proportion m/s is 30, 60 and 90%. Using the hybrid strategy described in Subsection 4.2, the BEs are developed under the BSEL, BLL (for h = ±0.5) and BGEL (for q = ±0.5) functions, each with three weight values ω (= 0, 0.3, 0.8). The hyper-parameter values of α are taken as (a, b) = (0.1, 1). A total of 10,000 MCMC samples were generated, with the initial 2,000 iterations discarded as burn-in. It should be mentioned here that the Bayesian MCMC analysis is the most computationally expensive, followed by the frequentist analysis.
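The simulation steps above can be sketched in Python as follows (a minimal illustration with our own helper names; the inverse CDF in Step 6 uses the GEV model with µ = 0):

```python
import math, random

def gen_pffc_bbr(k, s, m, alpha, lam, xi, zeta, seed=7):
    """Simulate one PFFC-BBR sample from the GEV(alpha, lam) model (mu = 0)."""
    rng = random.Random(seed)
    p = rng.betavariate(xi, zeta)                    # Step 2: p ~ Beta(xi, zeta)
    removals = []
    for _ in range(m - 1):                           # Step 3: binomial removals
        n_left = s - m - sum(removals)
        removals.append(sum(rng.random() < p for _ in range(n_left)))
    removals.append(s - m - sum(removals))           # remove all remaining groups
    w = [rng.random() for _ in range(m)]             # Step 4: W_i ~ U(0, 1)
    # Step 5: V_i = W_i^{1/(i + r_m + ... + r_{m-i+1})}, U_i = 1 - V_m ... V_{m-i+1}
    v = [w[i - 1] ** (1.0 / (i + sum(removals[m - i:]))) for i in range(1, m + 1)]
    x, prod = [], 1.0
    for i in range(1, m + 1):
        prod *= v[m - i]                             # prod = V_m * V_{m-1} * ... * V_{m-i+1}
        # Step 6: x_(i) = F^{-1}(1 - (1 - U_i)^{1/k}) = log(-log(prod)/(alpha*k))/lam
        x.append(math.log(-math.log(prod) / (alpha * k)) / lam)
    return x, removals
```

By construction the output is ordered, and the removals always sum to s − m.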
The average estimates (AEs), mean squared errors (MSEs), relative absolute biases (RABs), and average confidence lengths (ACLs) of the acquired estimators of α are calculated, respectively, as

AE(α̂) = (1/G) Σ_{i=1}^{G} α̂^(i),

MSE(α̂) = (1/G) Σ_{i=1}^{G} (α̂^(i) − α)²,

RAB(α̂) = (1/G) Σ_{i=1}^{G} |α̂^(i) − α| / α,

and

ACL(α) = (1/G) Σ_{i=1}^{G} [U(α̂^(i)) − L(α̂^(i))],

where G is the number of generated samples, α̂^(i) is the estimate of α from the ith sample, and L(·) and U(·) denote the lower and upper interval bounds, respectively. In a similar fashion, the AEs, MSEs, RABs and ACLs of λ can be easily calculated.
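These four criteria can be computed with a few lines of Python (an illustrative helper; `estimates` holds the G point estimates and `intervals` the corresponding (lower, upper) bounds):

```python
def sim_metrics(estimates, intervals, true_val):
    """AE, MSE, RAB and ACL over G Monte Carlo replications."""
    G = len(estimates)
    ae = sum(estimates) / G
    mse = sum((e - true_val) ** 2 for e in estimates) / G
    rab = sum(abs(e - true_val) for e in estimates) / (G * true_val)
    acl = sum(hi - lo for lo, hi in intervals) / G
    return ae, mse, rab, acl
```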
All evaluations were performed using R software with two recommended statistical packages, namely the 'CODA' and 'maxLik' packages by Plummer et al. [23] and Henningsen and Toomet [22], respectively. The simulation results for α and λ are reported in Tables 2-6. In Tables 2-5, the AEs are reported in the first row, while their (MSEs, RABs) are reported in the second row.

Results and discussions
From Tables 2-6, the following comments can be made:
• In general, the proposed MLEs and BEs of the unknown parameters of the GEV distribution perform very well in the sense of their MSEs, RABs, and ACLs.
• As s (or m/s) increases, the proposed estimates become even better in terms of their MSEs and RABs, as expected.
• As k increases, the MSEs and RABs associated with α typically grow, while those associated with λ typically decrease.
• Since prior information is incorporated, the frequentist estimates are outperformed by the Bayesian estimates based on gamma conjugate priors and BL functions.
• Regarding the asymmetric BL functions, the BEs provide better results than those obtained under the symmetric BL functions.
• Concerning the effect of the BL functions, the BEs of α and λ under the BLL and BGEL functions are overestimates (for h, q < 0) and underestimates (for h, q > 0); this is one of the advantageous features of working with asymmetric BL functions.
• Among all estimates, the BEs under the BGEL function perform better in most cases than those under the other competing loss functions.
• In particular, when k = 1, the MSEs and RABs of the various BEs of α and λ increase for all ω > 0.
• When k = 3, the MSEs and RABs associated with α decrease while those associated with λ increase, for all ω > 0.
• When ω is close to one, the MSEs and RABs of the Bayesian estimates of α and λ are almost equal to those of the corresponding MLEs.
• When ω = 0, the Bayesian estimates are better than the others in terms of the smallest MSEs and RABs.
• The MSE and RAB values with k = 3 are very similar to those for the PCT2-BBR (with k = 1).
• As we would expect, the ACLs of the ACI/HPD intervals narrow as s (or m/s) increases.
• As k increases, the ACLs associated with α increase while those associated with λ decrease.
• As k increases, the ACLs of HPD intervals narrow down for α and λ.
• It should be mentioned that the Bayesian analysis is the most computationally expensive, followed by the classical analysis.
• In conclusion, it is advised to use the Gibbs-within-M-H algorithm for Bayesian estimation of the unknown parameters of the GEV distribution.

Clinical applications
This section aims to demonstrate the adaptability and flexibility of the proposed methodologies to actual phenomena. To achieve this, two real applications from clinical trials are presented.

COVID-19 data analysis
This application provides an analysis of the mortality rates of COVID-19 in the United Kingdom for 70 consecutive days from 1 January to 11 March 2021 [https://coronavirus.data.gov.uk/]; see Table 7. To check the validity of the proposed model, the Kolmogorov-Smirnov (K-S) statistic and its P-value are obtained. First, using the complete COVID-19 data, the MLEs with their standard errors (SEs) of α and λ are 0.0767 (0.0236) and 1.9176 (0.1809), respectively, and the K-S statistic (P-value) is 0.117 (0.293). This result indicates that the GEV distribution fits the COVID-19 data well.
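For reference, the K-S distance between the empirical CDF and a fitted CDF, the quantity behind the reported statistic, can be sketched in Python as follows (the P-value, which in practice comes from standard software, is not reproduced here):

```python
def ks_distance(data, cdf):
    """One-sample Kolmogorov-Smirnov distance sup_x |F_n(x) - F(x)|."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, xv in enumerate(xs, start=1):
        f = cdf(xv)
        # the empirical CDF jumps from (i - 1)/n to i/n at xv
        d = max(d, i / n - f, f - (i - 1) / n)
    return d
```

Passing the fitted GEV CDF (1.2), evaluated at the MLEs above, yields the distance that is then compared with the K-S critical values.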
To evaluate the existence and uniqueness of α̂ and λ̂, the contour plot of the log-likelihood function (3.2) using the complete COVID-19 data is plotted in Figure 1. It shows, from the maximum point marked in the innermost contour, that the MLEs α̂ ≈ 0.077 and λ̂ ≈ 1.918 exist and are unique. Therefore, we suggest taking these estimates as initial guesses when running any additional computational iterations. Table 7. Mortality rate of seventy COVID-19 patients in UK.
Moreover, some important summary statistics, namely the mean, median, mode, standard deviation (SD) and skewness (Sk.) of the MCMC variates of α, λ, ξ and ζ after burn-in, are also computed; see Table 11. To assess the convergence of the MCMC outputs, the trace and density plots of α, λ, ξ and ζ are plotted with their sample means (horizontal solid lines) and 95% HPD intervals (horizontal dashed lines); see Figure 2. It turns out that the proposed MCMC algorithm converges well, showing that the size of the burn-in sample is adequate to disregard the effect of the initial guesses. Using the Gaussian kernel, the approximate marginal densities of α, λ, ξ and ζ (where the sample mean is marked by a dashed line), together with their histograms, are also plotted in Figure 2. It is evident that the generated posterior samples of all unknown parameters are fairly symmetric. Figure 2. Trace (right) and density (left) plots for MCMC draws of α, λ, ξ and ζ using COVID-19 data.
Ovarian cancer data analysis

This application analyzes the survival times of ovarian cancer (OC) patients after surgical treatment. There are two main reasons to consider these data. First, the data show an increasing failure rate that matches the GEV distribution; one may trace the shape of the HRF using the total time on test (TTT) transform plot. Figure 3 indicates that the TTT diagram is concave for the OC data, which implies that the HRF is an increasing function of time. Second, we tested the GEV distribution fit using the K-S statistic, which also suggests that the GEV distribution fits the OC data well. Here, the MLEs are α̂ = 0.0994 and λ̂ = 0.0029, and the K-S statistic (P-value) for the OC data is 0.19 (0.22). Therefore, the GEV distribution may be a reasonable choice for modeling these OC data. Using the complete OC data, the contour plot of (3.2) is also plotted and displayed in Figure 3. It supports the same numerical findings, namely that the MLEs of α and λ exist and are unique. Further, we suggest taking α̂ ≈ 0.1 and λ̂ ≈ 0.003 as initial guesses to start any other numerical calculations. From the real OC data set, for fixed ξ = ζ = 5 and different choices of m, three artificial PFFC-BBR samples are generated; see Table 12. The BEs of α, λ, ξ and ζ, using non-informative gamma priors, are obtained by running the MCMC chain 6,000 times and discarding the first 1,000 values. The initial MCMC values of α, λ, ξ and ζ were taken to be their MLEs. Taking ω = 0.5, the shape parameters h and q of the BLL and BGEL functions, respectively, are taken as h = q = (−2, 0.02, 2). From the samples in Table 12, the maximum likelihood and Bayes estimates of α, λ, ξ and ζ with their associated SEs are computed and presented in Table 13. Also, 95% two-sided ACI/HPD intervals of α, λ, ξ and ζ, along with their lengths, are computed and listed in Table 14. Some summary statistics for the MCMC outputs of the unknown quantities are computed and provided in Table 15. It is observed from Tables 13 and 14 that the Bayes MCMC estimates of α, λ, ξ and ζ perform better than the frequentist estimates.
For more illustration, the trace and marginal PDFs plots using 5,000 MCMC outputs of α, λ, ξ, and ζ are plotted in Figure 4. It shows that (i) the MCMC technique based on the remaining 5,000 variates converges successfully; (ii) removing the first 1,000 samples as burn-in is an appropriate size to eliminate the influence of the starting values; and (iii) the generated posterior samples of all unknown parameters are fairly symmetrical. As a summary, the results established based on the OC data support the same findings established from the COVID-19 data.
Finally, we conclude that the analysis results developed from the complete lifetimes of the coronavirus disease 2019 and ovarian cancer data sets provide a good demonstration of the proposed censoring plan, which may be recommended when examining other novel sampling designs in future work.

Conclusions
The present study introduces a novel sampling technique for life-testing investigations, named progressive first-failure censoring with beta-binomial removals, in which the removals follow the beta-binomial probability law. This approach enables the removal of surviving groups from a life test, according to a beta-binomial probability distribution, while the experiment is being conducted. The maximum likelihood and Bayesian estimation of the unknown parameters of the generalized extreme value distribution has been discussed based on the proposed scheme. Markov chain Monte Carlo techniques have been employed to derive the Bayes estimators under both symmetric and asymmetric balanced loss functions, as closed-form solutions for such estimators are not available. In addition, the asymptotic confidence interval and highest posterior density interval of each unknown parameter have been estimated. As expected, the computational results showed that the Bayes approach provides more accurate estimates of the parameters than the classical approach, even when a vague prior is considered. To demonstrate the applicability of the proposed censoring plan in real-world practice, two numerical applications using two clinical data sets have been analyzed. As a future study, one can easily extend the methodologies described here to other lifetime models or to other new censoring mechanisms, e.g., adaptive Type-II progressively hybrid censoring with beta-binomial removals. It would also be worthwhile to consider generalized extreme value distribution-based modelling for nonlinear functions and fishery data; see Contreras-Reyes et al. [36]. Lastly, the methodology under discussion offers a highly adaptable approach for conducting life-test experiments and is therefore recommended for implementation in various fields, such as medicine, engineering and chemistry, that require this type of life-test mechanism.

Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.