
Cure Rate-Based Step-Stress Model


Abstract

In this article, we consider step-stress accelerated life testing (SSALT) models, assuming that the time-to-event distribution belongs to the proportional hazard family and that the underlying population contains long-term survivors. Since the mean time to the event of interest naturally shortens as the stress level increases, a method for obtaining order-restricted maximum likelihood estimators (MLEs) of the model parameters is proposed, based on the expectation maximization (EM) algorithm coupled with a reparametrization technique. To illustrate the effectiveness of the proposed method, extensive simulation experiments are performed and a real-life data example is analyzed in detail.


References

  1. Balakrishnan N, Beutner E, Kateri M (2009) Order restricted inference for exponential step-stress models. IEEE Trans Reliab 58:132–142

  2. Bhattacharyya GK, Soejoeti Z (1989) A tampered failure rate model for step-stress accelerated life test. Commun Stat Theory Methods 18:1627–1643

  3. Farewell VT (1982) The use of mixture models for the analysis of survival data with long-term survivors. Biometrics 38:1041–1046

  4. Greven S, Bailer AJ, Kupper LL, Muller KE, Craft JL (2004) A parametric model for studying organism fitness using step-stress experiments. Biometrics 60(3):793–799

  5. Hanin L, Huang SL (2014) Identifiability of cure models revisited. J Multivar Anal 130:261–274

  6. Kannan N, Kundu D (2020) Simple step-stress models with a cure fraction. Brazilian J Probab Stat 34:2–17

  7. Kateri M, Kamps U (2015) Inference in step-stress models based on failure rates. Stat Pap 56:639–660

  8. Khamis IH, Higgins JJ (1998) A new model for step-stress testing. IEEE Trans Reliab 47:131–134

  9. Kundu D, Ganguly A (2017) Analysis of step-stress models: existing methods and recent developments. Elsevier/Academic Press, Amsterdam

  10. Maller RA, Zhou X (1996) Survival analysis with long-term survivors. Wiley, New York

  11. Pal A, Mitra S, Kundu D (2020) Order restricted classical inference of a Weibull multiple step-stress model. J Appl Stat. https://doi.org/10.1080/02664763.2020.1736526

  12. Pal A, Samanta D, Mitra S, Kundu D (2021) A simple step-stress model for Lehmann family of distributions. In: Advances in statistics - theory and applications: honoring the contributions of Barry C. Arnold in statistical science, Part VI, pp 315–343

  13. Samanta D, Ganguly A, Gupta A, Kundu D (2019) On classical and Bayesian order restricted inference for multiple exponential step-stress model. Statistics 53(1):177–195

  14. Samanta D, Kundu D (2018) Order restricted inference of a multiple step-stress model. Comput Stat Data Anal 117:62–75

  15. Samanta D, Kundu D (2021) Meta-analysis of a step-stress experiment under Weibull distribution. J Stat Comput Simul. https://doi.org/10.1080/00949655.2021.1873992

  16. Sedyakin NM (1966) On one physical principle in reliability theory. Tech Cybern 3:80–87

  17. Self SG, Liang KY (1987) Asymptotic properties of the maximum likelihood estimators and likelihood ratio test under non-standard conditions. J Am Stat Assoc 82:605–610

  18. Wang RH, Fei HL (2003) Uniqueness of the maximum likelihood estimate of the Weibull distribution tampered failure rate model. Commun Stat Theory Methods 32(12):2321–2338


Acknowledgements

The authors would like to thank the two anonymous reviewers for their constructive suggestions, which helped improve the manuscript significantly.

Author information

Corresponding author

Correspondence to Debasis Kundu.

Ethics declarations

Conflict of interest

The authors do not have any conflict of interest in the preparation of this manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Special Issue: AISC-2021 Special Collection” guest edited by Tahani Coolen-Maturi, Javid Shabbir, and Arman Sabbaghi.

Appendix

1.1 Proof of Lemma 1

For fixed \(\alpha \), we have \(m_1^{k+1}(p)=\bar{n}_{m+1} \ln p + \displaystyle \sum _{i \in J_{m+2}}(1-\pi _{i}^{(k)})\ln (1-p)+ \displaystyle \sum _{J_{m+2}}\pi _{i}^{(k)}\ln (p)+c,\) where c does not depend on p. Moreover,

$$\begin{aligned} m_1^{k+1'}(p)=\frac{\bar{n}_{m+1}}{p}+\displaystyle \sum _{J_{m+2}}\frac{\pi _{i}^{(k)}}{p}-\displaystyle \sum _{J_{m+2}}\frac{(1-\pi _{i}^{(k)})}{1-p} \end{aligned}$$

and

$$\begin{aligned} \,\,m_1^{k+1''}(p)= -\frac{\bar{n}_{m+1}}{p^2}-\displaystyle \sum _{J_{m+2}}\frac{\pi _{i}^{(k)}}{p^2}-\displaystyle \sum _{J_{m+2}}\frac{(1-\pi _{i}^{(k)})}{(1-p)^2}. \end{aligned}$$

Now, since \(m_1^{k+1''}(p)<0\), it follows that \(m_1^{k+1}(p)\) is a concave function. The result then follows from the fact that \(m_1^{k+1}(p) \rightarrow -\infty \) as \(p \rightarrow 0\) or \(p \rightarrow 1.\)
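For readers who want to check the lemma numerically, the following is a minimal sketch (Python; the values of \(\bar{n}_{m+1}\) and \(\pi_i^{(k)}\) are illustrative assumptions, not quantities from the paper). It evaluates \(m_1^{k+1}(p)\) on a grid, confirms concavity via negative second differences, and compares the grid maximizer with the stationary point obtained by solving \(m_1^{k+1'}(p)=0\) from the derivative displayed above.

```python
import numpy as np

# Illustrative values only (not from the paper's data)
n_bar = 12                                   # \bar{n}_{m+1}
pi_k = np.array([0.7, 0.4, 0.9, 0.2])        # \pi_i^{(k)}, i in J_{m+2}

def m1(p):
    # m_1^{k+1}(p) with the additive constant c dropped
    return n_bar * np.log(p) + np.sum((1 - pi_k) * np.log(1 - p)) + np.sum(pi_k * np.log(p))

p_grid = np.linspace(1e-4, 1 - 1e-4, 20001)
vals = np.array([m1(p) for p in p_grid])

# Concavity: all second differences are negative on the grid
assert np.all(np.diff(vals, 2) < 0)

# Unique interior maximizer: grid argmax vs. the root of m_1^{k+1'}(p) = 0
p_hat_grid = p_grid[np.argmax(vals)]
p_hat_root = (n_bar + pi_k.sum()) / (n_bar + len(pi_k))
print(p_hat_grid, p_hat_root)                # the two agree up to grid resolution
```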

1.2 Proof of Theorem 2

We provide the proof for four stress levels for illustration; one can proceed along similar lines in the general case. For \(m=3,\) the log-likelihood function at the \((k+1)\)th step (see (10)) becomes

$$\begin{aligned} Q(\varvec{\theta },\varvec{\pi ^{(k)}}) = k_1(p|\varvec{\theta ^{(k)}})+k_2(\varvec{\theta ^*}|\varvec{\theta ^{(k)}}). \end{aligned}$$

From \(k_1(p|\varvec{\theta ^{(k)}}),\) one can obtain the MLE of p explicitly using (13). Also, for fixed \(\alpha ,\) we have

$$\begin{aligned} k_2^{*}(\beta _1,\beta _2,\beta _3,\lambda _4|\varvec{\theta }^{(k)}) = {}&\sum _{k = 1}^{3} \overline{n}_k \ln \beta _k +\overline{n}_{4}\ln \lambda _{4} -\lambda _{4} \big \{\sum \limits _{i=1}^{3} \prod _{j=i}^{m}\beta _j E_i(\alpha )+E_{4}(\alpha )\big \} \\ &+ \sum _{i=1}^{\bar{n}_{m+1}} \ln f_0(t_{i:n})-\sum _{i=1}^{\bar{n}_{m+1}} \ln [1-F_0(t_{i:n})], \end{aligned}$$
(16)

where \(E_i(\alpha )\) is as defined in (8). The function (16) has a unique maximum at

\((\widehat{\beta }_1^{(\alpha )},\,\widehat{\beta }_2^{(\alpha )},\,\widehat{\beta }_3^{(\alpha )},\,\widehat{\lambda }_{4}^{(\alpha )}),\) where

$$\begin{aligned} \widehat{\beta }_1^{(\alpha )}= & {} \dfrac{n_1E_2(\alpha )}{n_2E_1(\alpha )},\,\widehat{\beta }_2^{(\alpha )}=\dfrac{n_2E_3(\alpha )}{n_3E_2(\alpha )},\,\widehat{\beta }_3^{(\alpha )}=\dfrac{n_3E_4(\alpha )}{n_4E_3(\alpha )},\,\widehat{\lambda }_{4}^{(\alpha )}=\dfrac{n_4}{E_4(\alpha )}. \end{aligned}$$
(17)
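For fixed \(\alpha \), the estimates in (17) are simple ratios of the stage-wise failure counts and the quantities \(E_i(\alpha )\). The short sketch below (Python; the counts and \(E_i(\alpha )\) values are illustrative assumptions, not outputs of the paper's data analysis) just evaluates those ratios:

```python
# Closed-form estimates in (17) for fixed alpha (m = 3); all inputs are illustrative.
def unrestricted_estimates(n, E):
    """n = (n_1, ..., n_4) failure counts; E = (E_1(alpha), ..., E_4(alpha))."""
    b1 = n[0] * E[1] / (n[1] * E[0])
    b2 = n[1] * E[2] / (n[2] * E[1])
    b3 = n[2] * E[3] / (n[3] * E[2])
    lam4 = n[3] / E[3]
    return b1, b2, b3, lam4

print(unrestricted_estimates(n=[20, 15, 10, 5], E=[8.0, 6.5, 5.0, 4.0]))
```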

Moreover, the function (16) does not have any other local maximum. Observe that for fixed \(\alpha \) and given \(\beta _1,\,\beta _2,\,\beta _3,\) the function (16) attains its maximum when

$$\begin{aligned} \widehat{\lambda }_{4}^{(\alpha )}(\beta _1,\beta _2,\beta _3)=\dfrac{\overline{n}_{4}}{\big \{\sum \limits _{i=1}^{3} \prod _{j=i}^{m}\beta _j E_i(\alpha )+E_{4}(\alpha )\big \}}. \end{aligned}$$

Plugging \( \widehat{\lambda }_{4}^{(\alpha )}(\beta _1,\beta _2,\beta _3)\) into (16) and ignoring the additive constants (which depend on \(\alpha \)), the profile log-likelihood function of \(\beta _1,\,\beta _2,\,\beta _3\) is obtained as

$$\begin{aligned} s^{(\alpha )}(\beta _1,\beta _2,\beta _3)= & {} \sum _{k = 1}^{3} \overline{n}_k \ln \beta _k- \overline{n}_4 \ln \left\{ \sum \limits _{i=1}^{3} \prod _{j=i}^{m}\beta _j E_i(\alpha )+E_{4}(\alpha )\right\} . \end{aligned}$$
(18)

Hence,

$$\begin{aligned} \sup \limits _{\lambda _4 \ge 0,\, 0\le \beta _{1},\beta _2,\beta _3 \le 1} k_2^{*}(\beta _1,\beta _2,\beta _3,\lambda _4|\varvec{\theta ^{(k)}})= \sup \limits _{ 0\le \beta _{1},\beta _2,\beta _3 \le 1} s^{(\alpha )}(\beta _1,\beta _2,\beta _3). \end{aligned}$$

From (18), it is observed that the function \(s^{(\alpha )}(\beta _1,\beta _2,\beta _3)\) has a unique maximum at \((\widehat{\beta }_1^{(\alpha )},\,\widehat{\beta }_2^{(\alpha )},\,\widehat{\beta }_3^{(\alpha )}),\) where \(\widehat{\beta }_1^{(\alpha )},\,\widehat{\beta }_2^{(\alpha )}\,\,\text {and}\,\,\widehat{\beta }_3^{(\alpha )}\) are the same as defined in (17), and the function (18) does not have any other local maximum.

Proposition 1

If \(\widehat{\beta }_1^{(\alpha )}>1,\) then

$$\begin{aligned} \sup \limits _{ 0\le \beta _{1} \le 1,\,\beta _2 \ge 0,\,\beta _3\ge 0} s^{(\alpha )}(\beta _1,\beta _2,\beta _3)= & {} \sup \limits _{ \beta _2 \ge 0,\,\beta _3\ge 0} s^{(\alpha )}(1,\beta _2,\beta _3). \end{aligned}$$
(19)

Proof

We prove it by contradiction. If the proposition is not true, then there must exist \(0<\widetilde{\beta }_1^{(\alpha )}<1,\,\widetilde{\beta }_2^{(\alpha )}>0,\,\widetilde{\beta }_3^{(\alpha )}>0,\) such that

$$\begin{aligned} \sup \limits _{ 0\le \beta _{1} \le 1,\,\beta _2 \ge 0,\,\beta _3\ge 0} s^{(\alpha )}(\beta _1,\beta _2,\beta _3)= & {} s^{(\alpha )}(\widetilde{\beta }_1^{(\alpha )},\,\widetilde{\beta }_2^{(\alpha )},\,\widetilde{\beta }_3^{(\alpha )}). \end{aligned}$$

Since \(s^{(\alpha )}(\beta _1,\beta _2,\beta _3) \rightarrow -\infty \) as \(\beta _2 \rightarrow \infty \) and as \(\beta _3 \rightarrow \infty ,\) this implies that \((\widetilde{\beta }_1^{(\alpha )},\,\widetilde{\beta }_2^{(\alpha )},\,\widetilde{\beta }_3^{(\alpha )}) \ne (\widehat{\beta }_1^{(\alpha )},\,\widehat{\beta }_2^{(\alpha )},\,\widehat{\beta }_3^{(\alpha )}) \) is a local maximum of \(s^{(\alpha )}(\beta _1,\beta _2,\beta _3),\) which contradicts the fact that \((\widehat{\beta }_1^{(\alpha )},\,\widehat{\beta }_2^{(\alpha )},\,\widehat{\beta }_3^{(\alpha )})\) is its only local maximum. \(\square \)

Along the same lines, it can be proved that if \(\widehat{\beta }_2^{(\alpha )}>1,\) then

$$\begin{aligned} \sup \limits _{ 0\le \beta _{2} \le 1,\,\beta _1 \ge 0,\,\beta _3\ge 0} s^{(\alpha )}(\beta _1,\beta _2,\beta _3)= & {} \sup \limits _{ \beta _1 \ge 0,\,\beta _3\ge 0} s^{(\alpha )}(\beta _1,1,\beta _3), \end{aligned}$$
(20)

if \(\widehat{\beta }_3^{(\alpha )}>1,\) then

$$\begin{aligned} \sup \limits _{ 0\le \beta _{3} \le 1,\,\beta _1 \ge 0,\,\beta _2\ge 0} s^{(\alpha )}(\beta _1,\beta _2,\beta _3)= & {} \sup \limits _{ \beta _1 \ge 0,\,\beta _2\ge 0} s^{(\alpha )}(\beta _1,\beta _2,1). \end{aligned}$$
(21)

Combining (19) and (20), we obtain that if \(\widehat{\beta }_1^{(\alpha )}>1\) and \(\widehat{\beta }_2^{(\alpha )}>1,\) then

$$\begin{aligned} \sup \limits _{ 0\le \beta _{1},\beta _2 \le 1,\,\beta _3 \ge 0} s^{(\alpha )}(\beta _1,\beta _2,\beta _3)= & {} \sup \limits _{ \beta _3\ge 0} s^{(\alpha )}(1,1,\beta _3). \end{aligned}$$
(22)

Similarly, from (19) and (21), it can be noted that if \(\widehat{\beta }_1^{(\alpha )}>1\) and \(\widehat{\beta }_3^{(\alpha )}>1,\) then

$$\begin{aligned} \sup \limits _{ 0\le \beta _{1},\beta _3 \le 1,\,\beta _2 \ge 0} s^{(\alpha )}(\beta _1,\beta _2,\beta _3)= & {} \sup \limits _{ \beta _2\ge 0} s^{(\alpha )}(1,\beta _2,1), \end{aligned}$$
(23)

and if \(\widehat{\beta }_2^{(\alpha )}>1\) and \(\widehat{\beta }_3^{(\alpha )}>1,\) then

$$\begin{aligned} \sup \limits _{ 0\le \beta _{2},\beta _3 \le 1,\,\beta _1 \ge 0} s^{(\alpha )}(\beta _1,\beta _2,\beta _3)= & {} \sup \limits _{ \beta _1\ge 0} s^{(\alpha )}(\beta _1,1,1). \end{aligned}$$
(24)

Finally, note that if \(\widehat{\beta }_1^{(\alpha )}>1\), \(\widehat{\beta }_2^{(\alpha )}>1\) and \(\widehat{\beta }_3^{(\alpha )}>1,\) then

$$\begin{aligned} \sup \limits _{0\le \beta _1,\beta _{2},\beta _3 \le 1} s^{(\alpha )}(\beta _1,\beta _2,\beta _3)= & {} s^{(\alpha )}(1,1,1). \end{aligned}$$
(25)

Additionally, we can observe that

$$\begin{aligned} \sup \limits _{ 0\le \beta _1,\beta _{2},\beta _3 \le 1} s^{(\alpha )}(\beta _1,\beta _2,\beta _3)= & {} \sup \limits _{0\le \beta _i\le 1} \sup \limits _{0\le \beta _j\le 1} \sup \limits _{0\le \beta _k\le 1} s^{(\alpha )}(\beta _1,\beta _2,\beta _3) \end{aligned}$$
(26)

for all \(i\ne j\ne k\) and \(1\le i,j,k \le 3.\) Now let us consider the different cases.

Case 1: \(\widehat{\beta }_1^{(\alpha )}>1,\,\widehat{\beta }_2^{(\alpha )}>1,\,\widehat{\beta }_3^{(\alpha )}>1\)

The MLEs of \(\beta _1,\) \(\beta _2,\) and \(\beta _3\) are 1, 1, and 1, respectively.

Case 2: \(\widehat{\beta }_1^{(\alpha )}>1,\,\widehat{\beta }_2^{(\alpha )}>1,\,\widehat{\beta }_3^{(\alpha )}\le 1\)

The MLEs of \(\beta _1\) and \(\beta _2\) are 1 and 1, respectively, and the MLE of \(\beta _3\) can be obtained as \(\mathop {\textrm{arg}\,\textrm{max}}\limits _{ 0\le \beta _3 \le 1} s^{(\alpha )}(1,1,\beta _3).\) Since \(s^{(\alpha )}(1,1,\beta _3)\) is a unimodal function, it has a unique maximum. Additionally,

$$\begin{aligned} \sup \limits _{ 0\le \beta _1,\beta _{2},\beta _3 \le 1} s^{(\alpha )}(\beta _1,\beta _2,\beta _3)= & {} \sup \limits _{0\le \beta _3\le 1} s^{(\alpha )}(1,1,\beta _3) \end{aligned}$$
(27)

owing to (26).

Case 3: \(\widehat{\beta }_1^{(\alpha )}>1,\,\widehat{\beta }_2^{(\alpha )} \le 1,\,\widehat{\beta }_3^{(\alpha )}\le 1\)

The MLE of \(\beta _1\) is 1 and the MLEs of \(\beta _2\) and \(\beta _3\) can be obtained as \(\mathop {\textrm{arg}\,\textrm{max}}\nolimits _{ 0\le \beta _{2},\beta _3 \le 1} s^{(\alpha )}(1,\beta _2,\beta _3).\) The function \(s^{(\alpha )}(1,\beta _2,\beta _3)\) has a unique maximum and we can repeat the same argument as before.
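The case analysis above amounts to a simple recipe: compute the unrestricted estimates (17), fix at 1 every \(\widehat{\beta }_k^{(\alpha )}\) that exceeds 1, and re-maximize the profile log-likelihood (18) over the remaining \(\beta \)'s on \((0,1]\). A sketch of this logic is given below (Python; here \(\bar{n}_k\) is taken as the cumulative count \(n_1+\cdots +n_k\), the counts and \(E_i(\alpha )\) values are illustrative assumptions, and scipy's bounded optimizer stands in for the one- or two-dimensional maximization described in Cases 2 and 3):

```python
import numpy as np
from scipy.optimize import minimize

def s_alpha(beta, nbar, E):
    """Profile log-likelihood (18) for m = 3 (additive constants dropped)."""
    b1, b2, b3 = beta
    lin = b1 * b2 * b3 * E[0] + b2 * b3 * E[1] + b3 * E[2] + E[3]
    return (nbar[0] * np.log(b1) + nbar[1] * np.log(b2)
            + nbar[2] * np.log(b3) - nbar[3] * np.log(lin))

def order_restricted_betas(n, E):
    """Case logic of Theorem 2 (sketch): pin any estimate from (17) exceeding 1
    at 1, then re-maximize (18) over the remaining betas on (0, 1]."""
    n, E = np.asarray(n, float), np.asarray(E, float)
    nbar = np.cumsum(n)                                   # \bar{n}_1, ..., \bar{n}_4
    b_hat = np.array([n[0] * E[1] / (n[1] * E[0]),        # unrestricted estimates, eq. (17)
                      n[1] * E[2] / (n[2] * E[1]),
                      n[2] * E[3] / (n[3] * E[2])])
    free = b_hat <= 1.0
    if free.all():                        # every estimate already lies in (0, 1]
        return b_hat
    if not free.any():                    # Case 1: all three pinned at 1
        return np.ones(3)
    def neg(x):
        beta = np.ones(3)
        beta[free] = x
        return -s_alpha(beta, nbar, E)
    res = minimize(neg, x0=np.full(free.sum(), 0.5),
                   bounds=[(1e-8, 1.0)] * int(free.sum()), method="L-BFGS-B")
    beta = np.ones(3)
    beta[free] = res.x
    return beta

# Illustrative inputs: here only beta_3_hat <= 1, so beta_1 = beta_2 = 1 (Case 2).
print(order_restricted_betas(n=[20, 15, 10, 5], E=[8.0, 6.5, 5.0, 2.0]))
```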

1.3 Proof of Theorem 3

We will provide the proof for five stress levels, i.e., for \(m=4\), although the same arguments hold for the general case. If \(n_1=0\), the MLEs of all the parameters do not exist, and therefore this case is not considered. Note that the order-restricted MLEs still exist even if there is no failure at the last stress level; this case is also covered in the proof.

Case 1: Exactly one internal stress level with zero failures.

Here we consider \(m=4\) and without loss of generality let \({n}_3=0\). The form of \(k_2^*(\varvec{\eta } \vert \varvec{\theta ^{(k)}})\) without the additive constant is given by

$$\begin{aligned} k_2^*(\lambda _5, \beta _1, \beta _2, \beta _3, \beta _4) = {}&n_1 \ln \beta _1 + (n_1+n_2) \ln \beta _2 + (n_1+n_2) \ln \beta _3 + (n_1+n_2+n_4) \ln \beta _4 \\ &+ (n_1+n_2+n_4+n_5) \ln \lambda _5 - \lambda _5 \beta _1 \beta _2 \beta _3 \beta _4 E_1(\alpha ) - \lambda _5 \beta _2 \beta _3 \beta _4 E_2(\alpha ) \\ &- \lambda _5 \beta _3 \beta _4 E_3(\alpha ) - \lambda _5 \beta _4 E_4(\alpha ) - \lambda _5 E_5 (\alpha ). \end{aligned}$$
(28)

For a fixed \(\beta _2\), the function (28) is maximized when

$$\begin{aligned} \widehat{\lambda }_5 = \frac{n_5}{E_5 (\alpha )}, \quad \widehat{\beta }_1(\beta _2) = \frac{n_1(\beta _2 E_2(\alpha ) + E_3 (\alpha ))}{n_2\beta _2 E_1 (\alpha )}, \\ \widehat{\beta }_3(\beta _2) = \frac{n_2 E_4(\alpha )}{n_4(\beta _2 E_2(\alpha ) + E_3(\alpha ))}, \quad \widehat{\beta }_4(\beta _2) = \frac{n_4 E_5(\alpha )}{n_5 E_4(\alpha )}. \end{aligned}$$

Therefore, the profile log-likelihood of \(\beta _2\) without the additive constant is given by

$$\begin{aligned} l_2(\beta _2) = {n}_2 \ln \beta _2 - {n}_2 \ln (\beta _2 E_2 (\alpha ) + E_3(\alpha )). \end{aligned}$$

Since

$$\begin{aligned} \frac{dl_2(\beta _2)}{d\beta _2} = \frac{{n}_2 E_3(\alpha )}{\beta _2(\beta _2 E_2(\alpha ) + E_3(\alpha ))} \ge 0, \end{aligned}$$

the profile log-likelihood of \(\beta _2\) is an increasing function of \(\beta _2\) \((0< \beta _2 \le 1)\). Hence, the maximum occurs at \(\beta _2=1\). Therefore,

$$\begin{aligned} \sup _{{\mathop {0 \le \beta _1, \beta _2, \beta _3, \beta _4 \le 1}\limits ^{\lambda _5 \ge 0}}} k_2^*(\lambda _5, \beta _1, \beta _2, \beta _3, \beta _4) = \sup _{{\mathop {0 \le \beta _1, \beta _3, \beta _4 \le 1}\limits ^{\lambda _5 \ge 0}}} k_2^*(\lambda _5, \beta _1, 1, \beta _3, \beta _4) \end{aligned}$$

Case 2: Zero failures at two consecutive stress levels, including the last stress level.

Here we have assumed that \({n}_4=0\) and \({n}_5=0.\) The form of \(k_2^*(\varvec{\eta } \vert \varvec{\theta ^{(k)}})\) without the additive constant is given by

$$\begin{aligned} k_2^*(\lambda _5, \beta _1, \beta _2, \beta _3, \beta _4) = {}&n_1 \ln \beta _1 + (n_1+n_2) \ln \beta _2 + (n_1+n_2+n_3) \ln \beta _3 + (n_1+n_2+n_3) \ln \beta _4 \\ &+ (n_1+n_2+n_3) \ln \lambda _5 - \lambda _5 \beta _1 \beta _2 \beta _3 \beta _4 E_1(\alpha ) - \lambda _5 \beta _2 \beta _3 \beta _4 E_2(\alpha ) \\ &- \lambda _5 \beta _3 \beta _4 E_3(\alpha ) - \lambda _5 \beta _4 E_4(\alpha ) - \lambda _5 E_5 (\alpha ). \end{aligned}$$
(29)

For a fixed \(\beta _3\) and \(\beta _4\), the function (29) is maximized when

$$\begin{aligned} \widehat{\lambda }_5 (\beta _3, \beta _4) = \frac{n_3}{\beta _3 \beta _4 E_3 (\alpha ) + \beta _4 E_4 (\alpha ) + E_5 (\alpha )}, \quad \widehat{\beta }_1(\beta _3, \beta _4) = \frac{n_1 E_2 (\alpha )}{n_2 E_1 (\alpha )}, \\ \text {and} \quad \widehat{\beta }_2(\beta _3, \beta _4) = \frac{n_2 (\beta _3 \beta _4 E_3 (\alpha ) + \beta _4 E_4 (\alpha ) + E_5 (\alpha ))}{n_3\beta _3 \beta _4 E_2(\alpha )}. \end{aligned}$$

Therefore, the profile log-likelihood of \(\beta _3\) and \(\beta _4\) without the additive constant is given by

$$\begin{aligned} l_{34}(\beta _3,\beta _4) = n_3 \ln \beta _3 +n_3 \ln \beta _4 - {n}_3 \ln (\beta _3 \beta _4 E_3 (\alpha ) + \beta _4 E_4 (\alpha ) + E_5 (\alpha )). \end{aligned}$$

Since

$$\begin{aligned} \frac{\partial l_{34}(\beta _3,\beta _4)}{\partial \beta _3} = \frac{{n}_3(\beta _4 E_4(\alpha ) + E_5(\alpha ))}{\beta _3(\beta _3 \beta _4 E_3(\alpha ) + \beta _4 E_4(\alpha ) + E_5(\alpha ))} \ge 0, \end{aligned}$$

the profile log-likelihood is an increasing function of \(\beta _3\) \((0< \beta _3 \le 1)\). Similarly it can also be shown that the profile log-likelihood is an increasing function of \(\beta _4\) \((0< \beta _4 \le 1)\). Hence, the maximum occurs at \(\beta _3=1\) and \(\beta _4=1\). Therefore,

$$\begin{aligned} \sup _{{\mathop {0 \le \beta _1, \beta _2, \beta _3, \beta _4 \le 1}\limits ^{\lambda _5 \ge 0}}} k_2^*(\lambda _5, \beta _1, \beta _2, \beta _3, \beta _4) = \sup _{{\mathop {0 \le \beta _1, \beta _2 \le 1}\limits ^{\lambda _5 \ge 0}}} k_2^*(\lambda _5, \beta _1, \beta _2, 1, 1) \end{aligned}$$

The other cases can be handled similarly, using the method proposed in [13]. Hence, the result follows. In general, it can be shown that the MLE of \(\beta _{k-1}\) is 1 if \(n_{k}=0\), for \(k=2, \ldots , m+1\).
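This closing remark can be turned into a simple preprocessing rule for the estimation routine: whenever a stress level records no failures, the corresponding \(\beta \) is pinned at 1 before the remaining parameters are updated. A minimal sketch (Python; the failure counts are illustrative assumptions) is:

```python
import numpy as np

def pinned_betas(n):
    """Theorem 3 rule (sketch): the MLE of beta_{k-1} is 1 whenever n_k = 0,
    for k = 2, ..., m+1.  Input n = (n_1, ..., n_{m+1}) are stage-wise failure
    counts; entry j-1 of the returned vector is True when beta_j is pinned at 1."""
    n = np.asarray(n)
    if n[0] == 0:
        raise ValueError("n_1 = 0: the MLEs do not exist in this case")
    return n[1:] == 0

# m = 4 (five stress levels), no failures at the last two levels (Case 2 of the proof):
print(pinned_betas([7, 5, 3, 0, 0]))     # -> [False False  True  True], i.e. beta_3 = beta_4 = 1
```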

1.4 Elements of Fisher Information Matrix

Suppose \(\varvec{\theta }=(\theta _1,\theta _2,\theta _3,\theta _4,\theta _5)=(p,\alpha ,\lambda _{1},\lambda _{2},\lambda _{3}),\) and let \( I_3(\varvec{\theta }) = \Bigg (\Bigg (-\dfrac{\partial ^2 l(\varvec{\theta }|\mathcal{D})}{\partial \theta _i \partial \theta _j }\Bigg )\Bigg ),\,i,j=1,2,\ldots ,5,\) be the observed Fisher information matrix for three stress levels, where \(l=l(\varvec{\theta }|\mathcal {D})\) is the log-likelihood function defined in (3) for \(m=2.\) The elements of the observed Fisher information matrix can be expressed as follows:

$$\begin{aligned} I_3(\varvec{\theta })= \begin{bmatrix} -\frac{\partial ^2 l}{\partial p^2} &{} -\frac{\partial ^2 l}{\partial p \partial \alpha } &{} -\frac{\partial ^2 l}{\partial p \partial \lambda _{1}} &{} -\frac{\partial ^2 l}{\partial p \partial \lambda _{2}}&{} -\frac{\partial ^2 l}{\partial p \partial \lambda _{3}}\\ -\frac{\partial ^2 l}{\partial \alpha \partial p } &{} -\frac{\partial ^2 l}{\partial \alpha ^2} &{} -\frac{\partial ^2 l}{\partial \alpha \partial \lambda _{1}} &{} -\frac{\partial ^2 l}{\partial \alpha \partial \lambda _{2}} &{} -\frac{\partial ^2 l}{\partial \alpha \partial \lambda _{3}}\\ -\frac{\partial ^2 l}{\partial \lambda _{1} \partial p } &{} -\frac{\partial ^2 l}{\partial \lambda _{1} \partial \alpha } &{} -\frac{\partial ^2 l}{\partial \lambda _{1}^2} &{} -\frac{\partial ^2 l}{\partial \lambda _{1} \partial \lambda _{2}} &{} -\frac{\partial ^2 l}{\partial \lambda _{1} \partial \lambda _{3}}\\ -\frac{\partial ^2 l}{\partial \lambda _{2} \partial p } &{}-\frac{\partial ^2 l}{\partial \lambda _{2} \partial \alpha } &{} -\frac{\partial ^2 l}{\partial \lambda _{2} \partial \lambda _{1} } &{} -\frac{\partial ^2 l}{\partial \lambda _{2}^2} &{} -\frac{\partial ^2 l}{\partial \lambda _2 \partial \lambda _{3}} \\ -\frac{\partial ^2 l}{\partial \lambda _{3} \partial p } &{}-\frac{\partial ^2 l}{\partial \lambda _{3} \partial \alpha } &{} -\frac{\partial ^2 l}{\partial \lambda _{3} \partial \lambda _{1}} &{} -\frac{\partial ^2 l}{\partial \lambda _{3} \partial \lambda _{2}} &{} -\frac{\partial ^2 l}{\partial \lambda _{3}^2} \end{bmatrix}, \end{aligned}$$

where

$$\begin{aligned} \frac{\partial ^2 l}{\partial p^2}= & {} -\frac{\bar{n}_3}{p^2}-\frac{(n-\bar{n}_3)\{1-g(\varvec{\theta })\}^2}{(1-p+pg(\varvec{\theta }))^2},\,\,\,\,\,\,\,\,\, \frac{\partial ^2 l}{\partial p \partial \alpha } =\frac{(n-\bar{n}_3)g(\varvec{\theta })k(\varvec{\theta })}{(1-p+pg(\varvec{\theta }))^2},\\ \frac{\partial ^2 l}{\partial p \partial \lambda _1}= & {} -\frac{(n-\bar{n}_3)g(\varvec{\theta }) \tau _1^{\alpha }}{(1-p+pg(\varvec{\theta }))^2},\,\,\,\,\,\,\,\,\, \frac{\partial ^2 l}{\partial p \partial \lambda _2} =-\frac{(n-\bar{n}_3)g(\varvec{\theta })(\tau _2^{\alpha }- \tau _1^{\alpha })}{(1-p+pg(\varvec{\theta }))^2},\\ \frac{\partial ^2 l}{\partial p \partial \lambda _3}= & {} -\frac{(n-\bar{n}_3)g(\varvec{\theta }) (\tau ^{\alpha }- \tau _2^{\alpha })}{(1-p+pg(\varvec{\theta }))^2},\\ \frac{\partial ^2{l}}{\partial \alpha ^2}= & {} -\frac{\bar{n}_3}{\alpha ^2}-\displaystyle \sum _{j=1}^{3}\lambda _j D_j''(\alpha )+ p(n-\bar{n}_3)a(\varvec{\theta }), \\ \frac{\partial ^2{l}}{\partial \lambda _1^2}= & {} - \frac{n_{1}}{\lambda _1^2} + \frac{(n-\bar{n}_3)g(\varvec{\theta })p(1-p)\tau _1^{2\alpha }}{(1-p+pg(\varvec{\theta }))^2}, \\ \frac{\partial ^2{l}}{\partial \lambda _2^2}= & {} - \frac{n_{2}}{\lambda _2^2}+\frac{(n-\bar{n}_3)g(\varvec{\theta })p(1-p)(\tau _2^{\alpha }-\tau _1^{\alpha })^2}{(1-p+pg(\varvec{\theta }))^2} , \\ \frac{\partial ^2{l}}{\partial \lambda _3^2}= & {} - \frac{n_{3}}{\lambda _3^2}+\frac{(n-\bar{n}_3)g(\varvec{\theta })p(1-p)(\tau ^{\alpha }-\tau _2^{\alpha })^2}{(1-p+pg(\varvec{\theta }))^2},\\ \frac{\partial ^2{l}}{\partial \alpha \partial \lambda _1}= & {} - D_1'(\alpha )-(n-\bar{n}_3)p\Bigg \{\frac{\tau _1^{\alpha }g(\varvec{\theta })k(\varvec{\theta })+\tau _1^{\alpha }\ln (\tau _1) g(\varvec{\theta })}{(1-p+pg(\varvec{\theta }))}- \frac{\tau _1^{\alpha }[g(\varvec{\theta })]^2k(\varvec{\theta }) p}{(1-p+pg(\varvec{\theta }))^2}\Bigg \},\\ \frac{\partial ^2{l}}{\partial \alpha \partial \lambda _2}= & {} - D_2'(\alpha )-p(n-\bar{n}_3) b(\varvec{\theta }),\,\,\,\,\,\, \frac{\partial ^2{l}}{\partial \alpha \partial \lambda _3} = - D_3'(\alpha )-p(n-\bar{n}_3) c(\varvec{\theta }),\\ \frac{\partial ^2{l}}{\partial \lambda _1 \partial \lambda _2}= & {} (n-\bar{n}_3)p(1-p)\frac{g(\varvec{\theta })\tau _1^{\alpha }(\tau _2^{\alpha }-\tau _1^{\alpha })}{(1-p+pg(\varvec{\theta }))^2}, \\ \frac{\partial ^2{l}}{\partial \lambda _2 \partial \lambda _3}= & {} (n-\bar{n}_3)p(1-p)\frac{g(\varvec{\theta })(\tau _2^\alpha -\tau _1^{\alpha })(\tau ^{\alpha }-\tau _2^{\alpha })}{(1-p+pg(\varvec{\theta }))^2}, \\ \frac{\partial ^2{l}}{\partial \lambda _1 \partial \lambda _3}= & {} (n-\bar{n}_3)p(1-p)\frac{g(\varvec{\theta })\tau _1^{\alpha }(\tau ^{\alpha }-\tau _2^{\alpha })}{(1-p+pg(\varvec{\theta }))^2}, \\ g(\varvec{\theta })= & {} e^{-(\lambda _1-\lambda _2)\tau _1^{\alpha }-(\lambda _2-\lambda _3)\tau _2^{\alpha }-\lambda _3 \tau ^{\alpha }},\\ k(\varvec{\theta })= & {} -(\lambda _1-\lambda _2)\tau _1^{\alpha }\ln {\tau _1}-(\lambda _2-\lambda _3)\tau _2^{\alpha }\ln {\tau _2}-\lambda _3 \tau ^{\alpha }\ln {\tau }, \\ a(\varvec{\theta })= & {} \Bigg \{\frac{(1-p+pg(\varvec{\theta }))\{g(\varvec{\theta })m(\varvec{\theta })+g(\varvec{\theta })[k(\varvec{\theta })]^2 \}-p[g(\varvec{\theta })]^2[k(\varvec{\theta })]^2}{(1-p+pg(\varvec{\theta }))^2}\Bigg \},\\ b(\varvec{\theta })= & {} \Bigg \{\frac{(\tau _2^{\alpha }-\tau _1^{\alpha })g(\varvec{\theta })k(\varvec{\theta })+(\tau _2^{\alpha }\ln (\tau _2)-\tau _1^{\alpha }\ln (\tau _1)) g(\varvec{\theta })}{(1-p+pg(\varvec{\theta }))}- \frac{(\tau _2^{\alpha }-\tau _1^{\alpha })[g(\varvec{\theta })]^2k(\varvec{\theta }) p}{(1-p+pg(\varvec{\theta }))^2}\Bigg \},\\ c(\varvec{\theta })= & {} \Bigg \{\frac{(\tau ^{\alpha }-\tau _2^{\alpha })g(\varvec{\theta })k(\varvec{\theta })+(\tau ^{\alpha }\ln (\tau )-\tau _2^{\alpha }\ln (\tau _2)) g(\varvec{\theta })}{(1-p+pg(\varvec{\theta }))}- \frac{(\tau ^{\alpha }-\tau _2^{\alpha })[g(\varvec{\theta })]^2k(\varvec{\theta }) p}{(1-p+pg(\varvec{\theta }))^2}\Bigg \},\\ m(\varvec{\theta })= & {} -[(\lambda _1-\lambda _2)\tau _1^{\alpha }(\ln {\tau _1})^2+(\lambda _2-\lambda _3)\tau _2^{\alpha }(\ln {\tau _2})^2+\lambda _3 \tau ^{\alpha }(\ln {\tau })^2], \\ D_{1}(\alpha )= & {} \sum _{i=1}^{\overline{n}_{1}}t_{i:n}^{\alpha } + (\overline{n}_{3}-\overline{n}_{1})\tau _{1}^{\alpha } , \, \, \, D_{2}(\alpha ) =\sum _{i=\overline{n}_{1}+1}^{\overline{n}_{2}}t_{i:n}^{\alpha } + (\overline{n}_{3}-\overline{n}_{2})\tau _{2}^{\alpha }- (\overline{n}_{3}-\overline{n}_{1})\tau _{1}^{\alpha } , \\ D_{3}(\alpha )= & {} \sum _{i=\overline{n}_{2}+1}^{\overline{n}_{3}}t_{i:n}^{\alpha } - (\overline{n}_{3}-\overline{n}_{2})\tau _{2}^{\alpha } ,\\ D_{1}'(\alpha )= & {} \sum _{i=1}^{\overline{n}_{1}}t_{i:n}^{\alpha }\ln t_{i:n} + (\overline{n}_{3}-\overline{n}_{1})\tau _{1}^{\alpha }\ln \tau _1 , \\ D_{2}'(\alpha )= & {} \sum _{i=\overline{n}_{1}+1}^{\overline{n}_{2}}t_{i:n}^{\alpha } \ln t_{i:n} + (\overline{n}_{3}-\overline{n}_{2})\tau _{2}^{\alpha }\ln \tau _2- (\overline{n}_{3}-\overline{n}_{1})\tau _{1}^{\alpha }\ln \tau _1 , \\ D_{3}'(\alpha )= & {} \sum _{i=\overline{n}_{2}+1}^{\overline{n}_{3}}t_{i:n}^{\alpha }\ln t_{i:n} - (\overline{n}_{3}-\overline{n}_{2})\tau _{2}^{\alpha } \ln \tau _2 , \\ D_{1}''(\alpha )= & {} \sum _{i=1}^{\overline{n}_{1}}t_{i:n}^{\alpha }(\ln t_{i:n})^2 + (\overline{n}_{3}-\overline{n}_{1})\tau _{1}^{\alpha }(\ln \tau _1)^2 , \\ D_{2}''(\alpha )= & {} \sum _{i=\overline{n}_{1}+1}^{\overline{n}_{2}}t_{i:n}^{\alpha } (\ln t_{i:n})^2 + (\overline{n}_{3}-\overline{n}_{2})\tau _{2}^{\alpha }(\ln \tau _2)^2- (\overline{n}_{3}-\overline{n}_{1})\tau _{1}^{\alpha }(\ln \tau _1)^2 , \\ D_{3}''(\alpha )= & {} \sum _{i=\overline{n}_{2}+1}^{\overline{n}_{3}}t_{i:n}^{\alpha }(\ln t_{i:n})^2 - (\overline{n}_{3}-\overline{n}_{2})\tau _{2}^{\alpha } (\ln \tau _2)^2 . \end{aligned}$$
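In practice, the expressions above can be cross-checked (or replaced, for other values of m) by differentiating the log-likelihood numerically. The following is a minimal sketch, assuming a user-supplied function log_lik that implements \(l(\varvec{\theta }|\mathcal {D})\) from (3); it is not part of the paper's code.

```python
import numpy as np

def observed_information(log_lik, theta, h=1e-4):
    """Observed Fisher information I(theta) = -Hessian of the log-likelihood,
    approximated by central finite differences.  `log_lik` is a user-supplied
    callable taking a parameter vector (p, alpha, lambda_1, lambda_2, lambda_3)."""
    theta = np.asarray(theta, dtype=float)
    k = theta.size
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            e_i, e_j = np.zeros(k), np.zeros(k)
            e_i[i], e_j[j] = h, h
            H[i, j] = (log_lik(theta + e_i + e_j) - log_lik(theta + e_i - e_j)
                       - log_lik(theta - e_i + e_j) + log_lik(theta - e_i - e_j)) / (4 * h * h)
    return -H

# Usage sketch: evaluated at the MLE, np.linalg.inv(observed_information(log_lik, theta_hat))
# gives the usual plug-in approximation to the variance-covariance matrix of the estimators.
```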

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Pal, A., Samanta, D. & Kundu, D. Cure Rate-Based Step-Stress Model. J Stat Theory Pract 17, 15 (2023). https://doi.org/10.1007/s42519-022-00313-4

