
Jackknife empirical likelihood for linear transformation models with right censoring


Abstract

A class of linear transformation models with censored data was proposed as a generalization of the Cox model in survival analysis. This paper develops an inference procedure for the regression parameters based on the jackknife empirical likelihood approach. We show that the limiting variance need not be estimated and that Wilks' theorem holds. The jackknife empirical likelihood method benefits from the simplicity of the optimization over jackknife pseudo-values. In our simulation studies, the proposed method is compared with the traditional empirical likelihood and normal approximation methods in terms of coverage probability and computational cost.


References

  • Aalen, O., Borgan, O., Gjessing, H. (2008). Survival and event history analysis: a process point of view. New York: Springer.

  • Andersen, P. K., Gill, R. D. (1982). Cox’s regression model for counting processes: a large sample study. Annals of Statistics, 10, 1100–1120.

  • Bennett, S. (1983). Analysis of survival data by the proportional odds model. Statistics in Medicine, 2, 273–277.


  • Bickel, P. J., Doksum, K. (1981). An analysis of transformations revisited. Journal of the American Statistical Association, 76, 296–311.

  • Box, G. E. P., Cox, D. R. (1964). An analysis of transformations. Journal of the Royal Statistical Society: Series B, 26, 211–252.

  • Breiman, L., Friedman, J. H. (1985). Estimating optimal transformations for multiple regression and correlation. Journal of the American Statistical Association, 80, 580–619.

  • Cai, J., Fan, J., Li, R., Zhou, H. (2005). Variable selection for multivariate failure time data. Biometrika, 92, 303–316.

  • Cai, T., Wei, L. J., Wilcox, M. (2000). Semiparametric regression analysis for clustered failure time data. Biometrika, 87, 867–878.

  • Carroll, R. J., Ruppert, D. (1984). Power transformation when fitting theoretical models to data. Journal of the American Statistical Association, 79, 321–328.

  • Chen, K., Jin, Z., Ying, Z. (2002). Semiparametric analysis of transformation models with censored data. Biometrika, 89, 659–668.

  • Chen, S., Peng, L., Qin, Y. (2009). Effects of data dimension on empirical likelihood. Biometrika, 96, 711–722.

  • Cheng, S. C., Wei, L. J., Ying, Z. (1995). Analysis of transformation models with censored data. Biometrika, 82, 835–845.

  • Cox, D. R. (1972). Regression models and life-tables. Journal of the Royal Statistical Society: Series B, 34, 187–220.


  • Fine, J. P., Ying, Z., Wei, L. J. (1998). On the linear transformation model with censored data. Biometrika, 85, 980–986.

  • Jing, B. Y., Yuan, J. Q., Zhou, W. (2009). Jackknife empirical likelihood. Journal of the American Statistical Association, 104, 1224–1232.

  • Kong, L., Cai, J., Sen, P. (2006). Asymptotic results for fitting semiparametric transformation models to failure time data from case-cohort studies. Statistica Sinica, 16, 135–151.

  • Lahiri, S., Mukhopadhyay, S. (2012). A penalized empirical likelihood method in high dimensions. Annals of Statistics, 40, 2511–2540.

  • Lee, A. J. (1990). U-statistics: Theory and practice. New York: CRC Press.

  • Owen, A. (1988). Empirical likelihood ratio confidence intervals for a single functional. Biometrika, 75, 237–249.


  • Owen, A. (1990). Empirical likelihood and confidence regions. Annals of Statistics, 18, 90–120.


  • Owen, A. (2001). Empirical Likelihood. Florida: Chapman and Hall/CRC.


  • Pettitt, A. N. (1982). Inference for the linear model using a likelihood based on ranks. Journal of the Royal Statistical Society: Series B, 44, 234–243.


  • Qin, J., Lawless, J. (1994). Empirical likelihood and general estimating equations. Annals of Statistics, 22, 300–325.

  • Tang, C. Y., Leng, C. (2010). Penalized high-dimensional empirical likelihood. Biometrika, 97, 905–920.

  • Yang, H., Zhao, Y. (2012). New empirical likelihood inference for linear transformation models. Journal of Statistical Planning and Inference, 142, 1659–1668.

  • Yu, W., Sun, Y., Zheng, M. (2011). Empirical likelihood method for linear transformation models. Annals of the Institute of Statistical Mathematics, 63, 331–346.

  • Zhang, Z., Zhao, Y. (2013). Empirical likelihood for linear transformation models with interval-censored failure time data. Journal of Multivariate Analysis, 116, 398–409.

  • Zhao, Y. (2010). Semiparametric inference for transformation models via empirical likelihood. Journal of Multivariate Analysis, 101, 1846–1858.



Acknowledgments

The authors would like to thank the two referees and the AE for their useful suggestions, which led to a significant improvement. Hanfang Yang’s research is supported by the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China (13XNF059). Yichuan Zhao was partially supported by the NSF grant DMS-1406163, NSA grant H98230-12-1-0209, and the RIG grant, Georgia State University. The authors thank Jenny Zhao for her help with the language.

Author information
Correspondence to Yichuan Zhao.

Proofs of Theorems

Denote \(\Gamma (\theta _{0})=\Gamma _{1}(\theta _{0})-\Gamma _{2}(\theta _{0})\), the limiting covariance matrix of \(n^{-3/2}\Omega _{w}(\theta _{0})\) defined in Fine et al. (1998), where

$$\begin{aligned}&\Gamma _{1}(\theta )=\displaystyle \lim _{n \rightarrow \infty } \frac{1}{n^{3}}\sum _{i=1}^{n}\sum _{j=1, j\ne i}^{n}\sum _{k=1, k\ne j}^{n} \{e_{ij}(\theta )+e_{ji}(\theta )\}\{e_{ik}(\theta )+e_{ki}(\theta )\}^{T},\\&\Gamma _{2}(\theta )=4\int _{0}^{t_{0}}\frac{q(\theta ,t)q^{T}(\theta ,t)}{\pi (t)}\, d\Lambda _{G}(t). \end{aligned}$$

Lemma 1

Under the conditions of Theorem 1,

$$\begin{aligned} \sqrt{n}\frac{1}{n}\sum _{l=1}^{n}\hat{Q}_l(\theta _0)=\sqrt{n}\hat{V}(\theta _{0})\overset{\mathfrak {D}}{\longrightarrow } N(0,4\Gamma (\theta _{0})), \end{aligned}$$

as \(n\rightarrow \infty \).

Proof

Noting that \(\hat{G}(\cdot )\) and \(\hat{\Lambda }_G(t)\) in \(\hat{b}(U_i,U_j;\theta _0)\) are estimated from the full sample, we re-express \(\sum _{l=1}^{n}\hat{Q}_l(\theta _0)\) as follows:

$$\begin{aligned} \sum _{l=1}^{n}\hat{Q}_l(\theta _0)&= \sum _{l=1}^{n} n \hat{V}(\theta _0) -\sum _{l=1}^{n} (n-1)\hat{V}_l(\theta _0) \nonumber \\&= n^2\hat{V}(\theta _0)-\frac{1}{n-2} \sum _{l=1}^{n} \sum _{i=1,i \ne l}^{n}\sum _{j=1,j \ne i, l}^{n}\left\{ \hat{b}(U_i,U_j;\theta _0) \right\} \nonumber \\&= n^2\hat{V}(\theta _0)- \frac{1}{n-2} \sum _{\{i,j,l\,|\, 1\le i,j,l\le n,\ i\ne j,\ j\ne l,\ l\ne i \}} \left\{ \hat{b}(U_i,U_j;\theta _0) \right\} \nonumber \\&= n^2\hat{V}(\theta _0)- \frac{n-2}{n-2} \sum _{i=1}^{n} \sum _{j=1,j\ne i}^{n} \left\{ \hat{b}(U_i,U_j;\theta _0) \right\} \nonumber \\&= n^2\hat{V}(\theta _0)-n(n-1)\cdot \frac{1}{n(n-1)} \sum _{i=1}^{n} \sum _{j=1,j\ne i}^{n} \left\{ \hat{b}(U_i,U_j;\theta _0) \right\} \nonumber \\&= n^2\hat{V}(\theta _0) -n(n-1)\hat{V}(\theta _0) \nonumber \\&= n\hat{V}(\theta _0). \end{aligned}$$
(6)

Thus, the following result holds by Lemma A.1 of Yang and Zhao (2012),

$$\begin{aligned} \sqrt{n}\frac{1}{n}\sum _{l=1}^{n}\hat{Q}_l(\theta _0)= \sqrt{n}\hat{V}(\theta _{0}) \overset{\mathfrak {D}}{\longrightarrow } N(0,4\Gamma (\theta _{0})). \end{aligned}$$
(7)

\(\square \)
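The identity (6) is purely algebraic and easy to check numerically. Below is a minimal sketch in Python; the kernel b_hat is a hypothetical stand-in for the paper's \(\hat{b}(U_i,U_j;\theta _0)\), and the check uses only the definitions of \(\hat{V}(\theta _0)\), \(\hat{V}_l(\theta _0)\) and \(\hat{Q}_l(\theta _0)\), so it holds for any kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
U = rng.normal(size=n)

def b_hat(u, v):
    # hypothetical placeholder kernel; any function of (u, v) works here
    return u * v

# full-sample estimator: V-hat = sum_{i != j} b_hat(U_i, U_j) / (n(n-1))
B = b_hat(U[:, None], U[None, :])
np.fill_diagonal(B, 0.0)
V_full = B.sum() / (n * (n - 1))

# leave-one-out estimators V-hat_l and jackknife pseudo-values Q-hat_l
Q = np.empty(n)
for l in range(n):
    keep = np.arange(n) != l
    B_l = B[np.ix_(keep, keep)]
    V_l = B_l.sum() / ((n - 1) * (n - 2))
    Q[l] = n * V_full - (n - 1) * V_l

# identity (6): sum_l Q-hat_l = n * V-hat, i.e. the pseudo-values
# average back to the full-sample estimator exactly
assert np.isclose(Q.mean(), V_full)
```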

Lemma 2

Denote

$$\begin{aligned} V_l(\theta )=\frac{1}{(n-1)(n-2)}\sum _{i=1,i \ne l}^{n}\sum _{j=1,j \ne i, l}^{n}\left\{ b(U_i,U_j;\theta ) \right\} , \end{aligned}$$

and

$$\begin{aligned} Q_l(\theta )=\, nV(\theta )-(n-1)V_l(\theta ). \end{aligned}$$

Under the conditions of Theorem 1, let

$$\begin{aligned} \Pi _n(\theta _0)=\, \frac{1}{n}\sum _{l=1}^{n}Q_l(\theta _0)Q_l^T(\theta _0), \end{aligned}$$

and

$$\begin{aligned} \hat{\Pi }_n(\theta _0)=\, \frac{1}{n}\sum _{l=1}^{n}\hat{Q}_l(\theta _0)\hat{Q}_l^T(\theta _0). \end{aligned}$$

Then, we have (a) \(\Pi _n(\theta _0)\overset{P}{\longrightarrow }4\Gamma (\theta _0)\); (b) \(\hat{\Pi }_n(\theta _0)\overset{P}{\longrightarrow }4\Gamma (\theta _0)\).

Proof

This lemma shows that the variance estimator of \(\hat{Q}_l(\theta _0)\), denoted by \(\hat{\Pi }_n(\theta _0)\) and built from estimated components, converges to \(4\Gamma (\theta _0)\) in probability. The \(\hat{Q}_l(\theta _0)\) do not form a strictly standard jackknife sample, so the properties of the jackknife EL cannot be applied to our method directly. Note that \(b(U_i,U_j;\theta _0)\) depends only on the \(i\)-th and \(j\)-th observations, since the terms \(q(\theta ,t)\), \(\pi (t)\), \(G(\cdot )\) and \(\Lambda _G(t)\) are deterministic rather than random. Therefore, it suffices to (a) prove that \(\Pi _n(\theta _0)\) converges to \(4\Gamma (\theta _0)\) in probability, and (b) verify that the gap between \(\Pi _n(\theta _0)\) and \(\hat{\Pi }_n(\theta _0)\) vanishes in probability. The details are as follows.

For (a), similar to Lemma A.3 of Jing et al. (2009), one has

$$\begin{aligned} \nonumber \Pi _n(\theta _0)= & {} \frac{1}{n}\sum _{l=1}^{n}Q_l(\theta _0)Q_l^T(\theta _0)\\ \nonumber= & {} \frac{1}{n}\sum _{l=1}^{n} \left\{ Q_l(\theta _0) Q_l^T(\theta _0) -2Q_l(\theta _0)V^T(\theta _0)+ V(\theta _0)V^T(\theta _0) \right\} +V(\theta _0)V^T(\theta _0) \\= & {} \frac{1}{n} \sum _{l=1}^{n}(Q_l(\theta _0)-V(\theta _0)) (Q_l(\theta _0)-V(\theta _0))^T+V(\theta _0)V(\theta _0)^T. \end{aligned}$$
(8)

As in equation (A.4) of Yang and Zhao (2012), it is clear that

$$\begin{aligned} \text{ Var }(V(\theta _0))=\, \frac{4\Gamma (\theta _0)}{n}+o\left( n^{-1}\right) ,\text { a.s.} \end{aligned}$$
(9)

Denote the jackknife estimator of \(\text{ Var }(V(\theta _0))\) by \(\hat{\text{ Var }}(jack)\equiv \frac{1}{n(n-1)}\sum _{l=1}^{n}(Q_l(\theta _0)-V(\theta _0)) (Q_l(\theta _0)-V(\theta _0))^T\); it is consistent for \(\text{ Var }(V(\theta _0))\) by Lee (1990), i.e.,

$$\begin{aligned} n\left[ \hat{\text{ Var }}(jack)-\text{ Var }(V(\theta _0))\right] \rightarrow 0,\text { a.s.} \end{aligned}$$
(10)

Thus, combining (9) and (10), the first part of (8) equals

$$\begin{aligned} \frac{1}{n} \sum _{l=1}^{n}(Q_l(\theta _0)-V(\theta _0)) (Q_l(\theta _0)-V(\theta _0))^T= & {} (n-1)\hat{\text{ Var }}(jack) \\= & {} 4\Gamma (\theta _0)+o(1). \end{aligned}$$

Note that \(\frac{1}{n}\sum _{l=1}^{n}Q_l(\theta _0)=V(\theta _0)\), which is similar to (6). Combining (9) and the law of large numbers for U-statistics, we obtain \(V(\theta _0)=O_p(n^{-1/2})\). Therefore, (8) becomes

$$\begin{aligned} \Pi _n(\theta _0)=4\Gamma (\theta _0)+o(1),\text { a.s.}, \end{aligned}$$

i.e.,

$$\begin{aligned} \Pi _n(\theta _0)\overset{P}{\longrightarrow }4\Gamma (\theta _0). \end{aligned}$$
(11)

For (b), to prove that the difference between \(\Pi _n(\theta _0)\) and \(\hat{\Pi }_n(\theta _0)\) vanishes in probability, we first check that \(|\hat{Q}_l(\theta _0)-Q_l(\theta _0)|=o_p(1)\). As in Yang and Zhao (2012), it is clear that

$$\begin{aligned} \sup _{i,j=1,\ldots ,n}\left| \hat{b}(U_{i}, U_{j}; \theta _{0})-b(U_{i}, U_{j}; \theta _{0})\right| = o_p(1). \end{aligned}$$

Then, according to the definition of \(Q_l(\theta _0)\), it can be re-expressed as follows.

$$\begin{aligned} Q_l(\theta _0)= & {} nV(\theta _0)-(n-1)V_l(\theta _0) \\= & {} \frac{n}{n(n-1)}\sum _{i=1}^{n} \sum _{j=1,j\ne i}^{n}\left\{ b(U_i,U_j;\theta _0) \right\} \\&-\frac{n-1}{(n-1)(n-2)}\sum _{i=1,i\ne l}^{n} \sum _{j=1, j\ne i,l}^{n}\left\{ b(U_i,U_j;\theta _0) \right\} \\= & {} \frac{1}{n-1}\sum _{j=1,j\ne l}^{n}\left\{ b(U_l,U_j;\theta _0) \right\} + \frac{1}{n-1}\sum _{i=1,i\ne l}^{n}\left\{ b(U_i,U_l;\theta _0) \right\} \\&- \frac{1}{(n-1)(n-2)}\sum _{i=1,i\ne l}^{n} \sum _{j=1, j\ne i,l}^{n}\left\{ b(U_i,U_j;\theta _0) \right\} \\= & {} 2W_l(\theta _0)-V_l(\theta _0), \end{aligned}$$

as \( b(U_i,U_j;\theta _0) \) is symmetric for any i and j. Similarly,

$$\begin{aligned} \hat{Q}_l(\theta _0)= & {} n\hat{V}(\theta _0)-(n-1)\hat{V_l}(\theta _0) \\= & {} \frac{n}{n(n-1)}\sum _{i=1}^{n} \sum _{j=1,j\ne i}^{n}\left\{ \hat{b}(U_i,U_j;\theta _0) \right\} \\&-\frac{n-1}{(n-1)(n-2)}\sum _{i=1,i\ne l}^{n} \sum _{j=1, j\ne i,l}^{n}\left\{ \hat{b}(U_i,U_j;\theta _0) \right\} \\= & {} \frac{1}{n-1}\sum _{j=1,j\ne l}^{n}\left\{ \hat{b}(U_l,U_j;\theta _0) \right\} + \frac{1}{n-1}\sum _{i=1,i\ne l}^{n}\left\{ \hat{b}(U_i,U_l;\theta _0) \right\} \\&- \frac{1}{(n-1)(n-2)}\sum _{i=1,i\ne l}^{n} \sum _{j=1, j\ne i,l}^{n}\left\{ \hat{b}(U_i,U_j;\theta _0) \right\} \\= & {} 2\hat{W}_l(\theta _0)-\hat{V}_l(\theta _0). \end{aligned}$$

Therefore, we have

$$\begin{aligned} \left| \hat{Q}_l(\theta _0)-Q_l(\theta _0) \right| = \left| 2\left[ \hat{W}_l(\theta _0) -W_l(\theta _0)\right] + \left[ V_l(\theta _0)-\hat{V}_l(\theta _0)\right] \right| . \end{aligned}$$
(12)

For the first part of (12), by Yang and Zhao (2012), we obtain that

$$\begin{aligned}&2|\hat{W}_l(\theta _0)-W_l(\theta _0)| \nonumber \\&\quad = \frac{2}{n-1}\left| \sum _{j=1,j\ne l}^{n}\{b(U_l,U_j;\theta _0)-\hat{b}(U_l,U_j;\theta _0)\} \right| \nonumber \\&\quad \le \frac{2}{n-1}\sum _{j=1,j\ne l}^{n}\sup _{l,j=1,\ldots ,n}\left| b(U_l,U_j;\theta _0)-\hat{b}(U_l,U_j;\theta _0) \right| \nonumber \\&\quad = o_p(1). \end{aligned}$$
(13)

Similarly, for the second part of (12), one has that

$$\begin{aligned}&|V_l(\theta _0)-\hat{V}_l(\theta _0)| \nonumber \\&\quad = \frac{1}{(n-1)(n-2)}\left| \sum _{i=1,i\ne l}^{n}\sum _{j=1,j\ne i, l}^{n} \{b(U_i,U_j;\theta _0)-\hat{b}(U_i,U_j;\theta _0)\}\right| \nonumber \\&\quad \le \frac{1}{(n-1)(n-2)}\sum _{i=1,i\ne l}^{n}\sum _{j=1,j\ne i, l}^{n}\sup _{i,j=1,\ldots ,n} \left| b(U_i,U_j;\theta _0)-\hat{b}(U_i,U_j;\theta _0) \right| \nonumber \\&\quad = o_p(1). \end{aligned}$$
(14)

Combining (12), (13) and (14), it leads to

$$\begin{aligned} |\hat{Q}_l(\theta _0)-Q_l(\theta _0)|=o_p(1). \end{aligned}$$
(15)

For any \(\alpha \in \mathbb {R}^{p+1}\), by (15),

$$\begin{aligned}&\alpha ^T \left\{ \Pi _n(\theta _0)-\hat{\Pi }_n(\theta _0)\right\} \alpha \\&\quad = \frac{1}{n}\alpha ^T \left\{ \sum _{l=1}^{n}Q_l(\theta _0)Q_l^T(\theta _0)-\sum _{l=1}^{n}\hat{Q}_l(\theta _0)\hat{Q}_l^T(\theta _0) \right\} \alpha \\&\quad = \frac{1}{n}\alpha ^T \left\{ \sum _{l=1}^{n}Q_l(\theta _0)Q_l^T(\theta _0)- 2\sum _{l=1}^{n}\hat{Q}_l(\theta _0)Q_l^T(\theta _0) +\sum _{l=1}^{n}\hat{Q}_l(\theta _0)\hat{Q}_l^T(\theta _0) \right. \\&\qquad \left. + 2\sum _{l=1}^{n}\hat{Q}_l(\theta _0)Q_l^T(\theta _0) - 2\sum _{l=1}^{n}\hat{Q}_l(\theta _0)\hat{Q}_l^T(\theta _0) \right\} \alpha \\&\quad = \frac{1}{n}\sum _{l=1}^{n}\left[ \alpha ^T(Q_l(\theta _0)-\hat{Q}_l(\theta _0)) \right] ^2+ \frac{2}{n}\sum _{l=1}^{n}\alpha ^T\hat{Q}_l(\theta _0)(Q_l(\theta _0) -\hat{Q}_l(\theta _0))^T\alpha \\&\quad = o_p(1), \end{aligned}$$

as both \(|Q_l(\theta _0)|\) and \(|\hat{Q}_l(\theta _0)|\) are uniformly bounded, due to the boundedness of \(|b(U_i,U_j;\theta _0)|\) and (15). Thus,

$$\begin{aligned} \hat{\Pi }_n(\theta _0)=\hat{\Pi }_n(\theta _0)-\Pi _n(\theta _0)+\Pi _n(\theta _0)\overset{P}{\longrightarrow }4\Gamma (\theta _0), \end{aligned}$$
(16)

which completes the proof. \(\square \)
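In code, the Lemma 2 estimator \(\hat{\Pi }_n(\theta _0)\) is a single matrix product over the pseudo-values. A minimal sketch, assuming Q is an array whose rows are the \(\hat{Q}_l(\theta _0)\) built as in the previous block:

```python
import numpy as np

def jel_variance(Q):
    """Pi-hat_n = (1/n) sum_l Q_l Q_l^T, the Lemma 2 estimator of 4*Gamma."""
    Q = np.asarray(Q, dtype=float)
    if Q.ndim == 1:
        Q = Q[:, None]   # scalar pseudo-values become an n x 1 column
    return Q.T @ Q / Q.shape[0]
```

Since \(\hat{\Pi }_n(\theta _0)\) is consistent for \(4\Gamma (\theta _0)\) and enters the EL ratio automatically, no separate estimate of the limiting variance is ever plugged in; this is the practical advantage highlighted in the abstract.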

Lemma 3

Under the conditions of Theorem 1, \(\Vert \lambda (\theta _0) \Vert =O_p(n^{-1/2})\), where \(\Vert \cdot \Vert \) denotes the Euclidean norm.

Proof

As in Owen (1990, p. 101), we write \(\lambda (\theta _0)=\rho \nu \), where \(\rho \ge 0\) and \(\Vert \nu \Vert =1\), and have

$$\begin{aligned} 0= & {} \Vert g(\rho \nu ) \Vert \\\ge & {} | \nu ^T g(\rho \nu )| \\= & {} \frac{1}{n}\left| \nu ^T \left[ \sum _{l=1}^{n}\frac{\hat{Q}_{l}(\theta _0)}{1+\rho \nu ^{T}\hat{Q}_{l}(\theta _0)} \right] \right| = \frac{1}{n}\left| \nu ^T \left[ \sum _{l=1}^{n}\hat{Q}_{l}(\theta _0)- \rho \sum _{l=1}^{n}\frac{\hat{Q}_{l}(\theta _0)\hat{Q}_{l}^T(\theta _0)\nu }{1+\rho \nu ^{T}\hat{Q}_{l}(\theta _0)} \right] \right| \\\ge & {} \frac{\rho }{n}\left| \nu ^T \sum _{l=1}^{n}\frac{\hat{Q}_{l}(\theta _0)\hat{Q}_{l}^{T}(\theta _0)}{1+\rho \nu ^{T}\hat{Q}_{l}(\theta _0)} \nu \right| - \frac{1}{n}\left| \nu ^T \sum _{l=1}^{n}\hat{Q}_{l}(\theta _0)\right| , \end{aligned}$$

where \(g(\cdot )\) is defined in (3). It follows that

$$\begin{aligned} \frac{\rho }{n}\left| \nu ^T \sum _{l=1}^{n}\frac{\hat{Q}_{l}(\theta _0)\hat{Q}_{l}^{T}(\theta _0)}{1+\rho \nu ^{T}\hat{Q}_{l}(\theta _0)} \nu \right| \le \frac{1}{n}\left| \nu ^T \sum _{l=1}^{n}\hat{Q}_{l}(\theta _0)\right| . \end{aligned}$$

It is clear that \(n^{-1}\sum _{l=1}^{n}\hat{Q}_{l}(\theta _{0})=\hat{V}(\theta _0)=O_{p}(n^{-1/2})\) by (7), and \(\nu ^{T}\hat{\Pi }_{n}(\theta _{0})\nu = \nu ^{T}(\hat{\Pi }_{n}(\theta _{0})-4\Gamma (\theta _{0}))\nu +4\nu ^{T}\Gamma (\theta _{0})\nu = O_{p}(1) \) by Lemma 2. As shown in Lemma 2, \(\Vert \hat{Q}_l(\theta _0) \Vert \) is uniformly bounded by \(M\) for all \(l\), thus \(1+\rho \nu ^{T}\hat{Q}_{l}(\theta _0) \le 1+\rho M \). We can obtain that

$$\begin{aligned} \frac{\rho \,\nu ^T \hat{\Pi }_n(\theta _0)\nu }{1+\rho M} \le \Vert \nu \Vert \cdot \left\| \frac{1}{n}\sum _{l=1}^{n} \hat{Q}_l(\theta _0) \right\| =O_p(n^{-1/2}). \end{aligned}$$

Hence,

$$\begin{aligned} \Vert \lambda (\theta _0)\Vert =\rho =O_p(n^{-1/2}). \end{aligned}$$
(17)

\(\square \)
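Numerically, \(\lambda (\theta _0)\) is obtained as the root of \(g(\lambda )=0\). The solver below is a generic Newton iteration for the empirical-likelihood dual problem in the spirit of Owen (1990), not code from the paper; the step-halving safeguard keeps every weight \(1+\lambda ^T\hat{Q}_l(\theta _0)\) positive.

```python
import numpy as np

def solve_lambda(Q, tol=1e-10, max_iter=100):
    """Solve g(lambda) = (1/n) sum_l Q_l / (1 + lambda^T Q_l) = 0 by Newton."""
    Q = np.asarray(Q, dtype=float)
    if Q.ndim == 1:
        Q = Q[:, None]
    n, p = Q.shape
    lam = np.zeros(p)
    for _ in range(max_iter):
        denom = 1.0 + Q @ lam          # n-vector of 1 + lambda^T Q_l
        if np.any(denom <= 1.0 / n):   # infeasible step: shrink toward 0
            lam *= 0.5
            continue
        g = (Q / denom[:, None]).mean(axis=0)
        if np.linalg.norm(g) < tol:
            break
        # Jacobian of g: -(1/n) sum_l Q_l Q_l^T / (1 + lambda^T Q_l)^2
        J = -(Q / denom[:, None] ** 2).T @ Q / n
        lam = lam - np.linalg.solve(J, g)
    return lam
```

At the true parameter, the returned multiplier should shrink at roughly the \(n^{-1/2}\) rate, in line with (17).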

Proof of Theorem 1

We derive an asymptotic expression for \(\lambda ({\theta })\) as the root of \(g(\lambda ({\theta }))=0\) when \(\theta \) is replaced by its true value \(\theta _0\), as follows:

$$\begin{aligned} 0=\,&g(\lambda (\theta _0))= \frac{1}{n} \sum _{l=1}^{n}\frac{\hat{Q}_{l}(\theta _0)}{1+\lambda (\theta _0)^{T}\hat{Q}_{l}(\theta _0)} \\ =\,&\frac{1}{n}\sum _{l=1}^{n}\hat{Q}_{l}(\theta _0)\left[ 1-\lambda ({\theta _0})^T \hat{Q}_{l}(\theta _0) + \frac{\lambda ({\theta _0})^T\hat{Q}_{l}(\theta _0)\hat{Q}_{l}^{T}(\theta _0)\lambda ({\theta _0})}{1+\lambda ({\theta _0})^T \hat{Q}_{l}(\theta _0)} \right] \\ =\,&\hat{V}(\theta _0)-\hat{\Pi }_n(\theta _0)\lambda ({\theta _0}) +\frac{1}{n}\sum _{l=1}^{n}\frac{\hat{Q}_{l}(\theta _0)\lambda ({\theta _0})^T\hat{Q}_{l}(\theta _0)\hat{Q}_{l}^{T}(\theta _0)\lambda ({\theta _0})}{1+\lambda ({\theta _0})^T \hat{Q}_{l}(\theta _0)}. \end{aligned}$$

Note that \(\Vert \hat{Q}_l(\theta _0)\Vert \) is uniformly bounded by \(M\), \(\Vert \hat{\Pi }_n(\theta _0)\Vert =O_p(1)\), and, by Lemma 3, \(1+\lambda ({\theta _0})^T \hat{Q}_{l}(\theta _0)=O_p(1) \). Thus, the last term of the above equation satisfies

$$\begin{aligned} \left| \frac{1}{n} \sum _{l=1}^{n}\frac{\hat{Q}_{l}(\theta _0)\lambda ({\theta _0})^T\hat{Q}_{l}(\theta _0)\hat{Q}_{l}^{T}(\theta _0) \lambda ({\theta _0})}{1+\lambda ({\theta _0})^T \hat{Q}_{l}(\theta _0)} \right| = O_p(n^{-1}). \end{aligned}$$

Hence, we write

$$\begin{aligned} \lambda ({\theta _0})=\hat{\Pi }_n^{-1}(\theta _0)\hat{V}(\theta _0)+O_p(n^{-1}). \end{aligned}$$
(18)

By (18), \(l(\theta _0)\) can be expanded as

$$\begin{aligned} l(\theta _0)&= 2\sum _{l=1}^{n}\log \{1+\lambda ({\theta _0})^{T}\hat{Q}_{l}(\theta _0)\} \\&= 2\sum _{l=1}^{n} \left[ \lambda ({\theta _0})^{T}\hat{Q}_{l}(\theta _0)-\frac{1}{2} \lambda ({\theta _0})^{T}\hat{Q}_{l}(\theta _0)\hat{Q}_{l}^{T}(\theta _0)\lambda ({\theta _0}) +O((\lambda ({\theta _0})^{T}\hat{Q}_{l}(\theta _0))^3) \right] \\&= 2n\hat{V}^T(\theta _0)\hat{\Pi }_n^{-1}(\theta _0)\hat{V}(\theta _0) - n\hat{V}^T(\theta _0)\hat{\Pi }_n^{-1}(\theta _0)\hat{\Pi }_n(\theta _0)\hat{\Pi }_n^{-1}(\theta _0)\hat{V}(\theta _0)\nonumber \\&\quad +O_p(n^{-1/2}) \\&= \sqrt{n}\hat{V}^T(\theta _0) \cdot \hat{\Pi }_n^{-1}(\theta _0)\cdot \sqrt{n}\hat{V}(\theta _0)+o_p(1), \end{aligned}$$

since \(\Vert \hat{Q}_l(\theta _0)\Vert \) is uniformly bounded, \(\hat{V}(\theta _0)=O_p(n^{-1/2})\), and \(\Vert \lambda ({\theta _0})\Vert =O_p(n^{-1/2})\). Thus, combining the results of Lemmas 1 and 2, we obtain that

$$\begin{aligned} l(\theta _0)\overset{\mathfrak {D}}{\longrightarrow }\chi _{p+1}^{2}. \end{aligned}$$
(19)

\(\square \)
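Theorem 1 is what makes the method usable without estimating \(\Gamma (\theta _0)\): one evaluates the log-EL ratio at a candidate \(\theta _0\) and compares it with a \(\chi ^2_{p+1}\) quantile. A sketch, reusing the hypothetical solve_lambda from the previous block:

```python
import numpy as np
from scipy.stats import chi2

def jel_log_ratio(Q):
    """l(theta_0) = 2 * sum_l log(1 + lambda^T Q_l) at the solved lambda."""
    Q = np.asarray(Q, dtype=float)
    if Q.ndim == 1:
        Q = Q[:, None]
    lam = solve_lambda(Q)
    return 2.0 * np.sum(np.log1p(Q @ lam))

def in_confidence_region(Q, level=0.95):
    Q = np.asarray(Q, dtype=float)
    if Q.ndim == 1:
        Q = Q[:, None]
    # Wilks calibration from Theorem 1: chi-square with p + 1 df,
    # where p + 1 is the dimension of the pseudo-values
    return jel_log_ratio(Q) <= chi2.ppf(level, df=Q.shape[1])
```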

Proof of Theorem 2

The proof is along the lines of Zhang and Zhao (2013). Let \(\tilde{\theta }_{2}=\arg \inf _{\theta _{2}}l(\theta _{10},\theta _{2})\), \( \tilde{D}(\theta _{0})= \displaystyle \lim _{n \rightarrow \infty } n^{-2}\sum _{i=1}^{n}\sum _{j=1,j\ne i}^{n}w_{ij}(\theta _{0}) \tilde{\eta }_{ij}(\theta _{0})\tilde{\eta }_{ij}^{T}(\theta _{0})\), where \(\tilde{\eta }_{ij}^{T}(\theta _{0})\) is the partial derivative of \(\eta _{ij}^{T}(\theta _{0})\) with respect to \(\theta _2\), and \(\tilde{\Phi }(\theta _{0})=\tilde{D}^T(\theta _{0})\Pi ^{-1}(\theta _{0})\tilde{D}(\theta _{0})\). Then, following similar arguments in Qin and Lawless (1994) and Fine et al. (1998), we have

$$\begin{aligned} \sqrt{n}(\tilde{\theta }_{2}-\theta _{20})= -\tilde{\Phi }(\theta _{0})^{-1}\tilde{D}(\theta _{0})^{T}\Pi (\theta _{0})^{-1}\frac{1}{\sqrt{n}}\sum _{l=1}^{n}\hat{Q}_{l}(\theta _{0})+o_{p}(1), \end{aligned}$$

and the Lagrange multiplier \(\lambda _2\) satisfies

$$\begin{aligned} \sqrt{n}\lambda _{2} = S_1 \frac{1}{\sqrt{n}} \sum _{l=1}^{n}\hat{Q}_{l}(\theta _{0})+o_{p}(1), \end{aligned}$$

where \(S_1=\Pi ^{-1}(\theta _{0})-\Pi ^{-1}(\theta _{0}) \tilde{D}(\theta _{0}) \tilde{\Phi }^{-1}(\theta _{0})\tilde{D}(\theta _{0})^{T}\Pi ^{-1}(\theta _{0})\). Although the \(\hat{Q}_l\)’s are not independent of each other, unlike the situation in Qin and Lawless (1994), Lemmas 1 and 2 guarantee the convergence rate of \(\sum _{l=1}^{n}\hat{Q}_{l}(\theta _{0})\) and the boundedness of \(\hat{Q}_{l}(\theta _{0})\). Then, similar to Zhang and Zhao (2013) and Theorem 1, one has that

$$\begin{aligned}&l^{*}(\theta _{10}) = \left\{ \frac{1}{\sqrt{n}}\sum _{l=1}^{n}\hat{Q}_{l}(\theta _{0})\right\} ^{T} S_1 \left\{ \frac{1}{\sqrt{n}}\sum _{l=1}^{n}\hat{Q}_{l}(\theta _{0})\right\} +o_{p}(1)\\&\quad \!=\! \left\{ \Pi ^{-1/2}(\theta _{0})\frac{1}{\sqrt{n}}\sum _{l=1}^{n}\hat{Q}_{l}(\theta _{0})\right\} ^{T} \left\{ I \!-\!\Pi ^{-1/2}(\theta _{0}) \tilde{D}(\theta _{0}) \tilde{\Phi }^{-1}(\theta _{0})\tilde{D}(\theta _{0})^{T}\Pi ^{-1/2}(\theta _{0}) \right\} \\&\qquad \times \left\{ \Pi ^{-1/2}(\theta _{0})\frac{1}{\sqrt{n}}\sum _{l=1}^{n}\hat{Q}_{l}(\theta _{0})\right\} +o_{p}(1) \\&\quad \equiv \left\{ \Pi ^{-1/2}(\theta _{0})\frac{1}{\sqrt{n}}\sum _{l=1}^{n}\hat{Q}_{l}(\theta _{0})\right\} ^{T} S_2\left\{ \Pi ^{-1/2}(\theta _{0})\frac{1}{\sqrt{n}}\sum _{l=1}^{n}\hat{Q}_{l}(\theta _{0})\right\} +o_{p}(1). \end{aligned}$$

Since \(S_2=I -\Pi ^{-1/2}(\theta _{0}) \tilde{D}(\theta _{0}) \tilde{\Phi }^{-1}(\theta _{0})\tilde{D}(\theta _{0})^{T}\Pi ^{-1/2}(\theta _{0})\) is a symmetric and idempotent matrix with trace \(q\), we have

$$\begin{aligned} l^{*}(\theta _{10}) \overset{\mathfrak {D}}{\longrightarrow } \chi _{q}^{2}. \end{aligned}$$

\(\square \)
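Theorem 2 licenses profiling out the nuisance block \(\theta _2\) numerically. The sketch below is schematic: pseudo_values is an assumed user-supplied function returning the array of \(\hat{Q}_l(\theta )\) at a given \(\theta \), jel_log_ratio and solve_lambda are from the sketches above, and the profiled ratio is calibrated against \(\chi ^2_{q}\) with \(q=\dim (\theta _1)\).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def profile_jel(theta1, theta2_init, pseudo_values):
    """l*(theta_1) = inf over theta_2 of l(theta_1, theta_2)."""
    def objective(theta2):
        theta = np.concatenate([np.atleast_1d(theta1), np.atleast_1d(theta2)])
        return jel_log_ratio(pseudo_values(theta))
    res = minimize(objective, x0=np.atleast_1d(theta2_init),
                   method="Nelder-Mead")
    return res.fun

# reject H0: theta_1 = theta_10 at level alpha iff
#   profile_jel(theta_10, theta2_start, pseudo_values) > chi2.ppf(1 - alpha, df=q)
```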

Cite this article

Yang, H., Liu, S. & Zhao, Y. Jackknife empirical likelihood for linear transformation models with right censoring. Ann Inst Stat Math 68, 1095–1109 (2016). https://doi.org/10.1007/s10463-015-0528-7
