
Empirical Likelihood-Based Inference for the Difference of Two Location Parameters Using Smoothed M-Estimators

  • Original Article
  • Journal of Statistical Theory and Practice

Abstract

We consider one of the classical problems in statistics: inference for the two-sample location problem. In this paper, we present a new empirical likelihood (EL) method for the difference of two smoothed M-estimators. To deal with additional nuisance scale parameters, we use the plug-in empirical likelihood, and we establish asymptotic properties of the new estimators. For the empirical study, we consider the important case of the smoothed Huber M-estimator. Our empirical results show that the new method is a competitive alternative to the classical procedures for inference about the difference of two location parameters. The software implementation for the new empirical likelihood method is based on the R package EL, which has been developed for related two-sample problems.


References

  1. Cers E, Valeinis J (2011) EL: two-sample empirical likelihood. R package version 1.0

  2. Hall P, Welsh AH (1984) Convergence for estimates of parameters of regular variation. Ann Stat 12(3):1079–1084

  3. Hampel F, Hennig C, Ronchetti E (2011) A smoothing principle for the Huber and other location M-estimators. Comput Stat Data Anal 55(1):324–337

  4. Heritier S, Cantoni E, Copt S, Victoria-Feser MP (2009) Robust methods in biostatistics. Wiley, New York

  5. Hjort NL, McKeague IW, Van Keilegom I (2009) Extending the scope of empirical likelihood. Ann Stat 37(3):1079–1111

  6. Huber PJ (1964) Robust estimation of a location parameter. Ann Math Stat 35:73–101

  7. Marazzi A (2002) Bootstrap tests for robust means of asymmetric distributions with unequal shapes. Comput Stat Data Anal 49:503–528

  8. Maronna RA, Martin RD, Yohai VJ (2006) Robust statistics. Theory and methods. Wiley, New York

  9. Owen AB (1988) Empirical likelihood ratio confidence intervals for a single functional. Biometrika 75(2):237–249

  10. Owen AB (2001) Empirical likelihood. Chapman & Hall/CRC Press, New York

  11. Qin J, Lawless J (1994) Empirical likelihood and general estimating equations. Ann Stat 22(1):300–325

  12. Qin Y, Zhao L (2000) Empirical likelihood ratio confidence intervals for various differences of two populations. J Syst Sci Complex 13(1):23–30

  13. Stigler SM (1977) Do robust estimators work with real data? Ann Stat 5(6):1055–1098

  14. Valeinis J (2007) Confidence bands for structural relationship models. PhD thesis, University of Goettingen, Goettingen

  15. Valeinis J, Cers E, Cielens J (2010) Two-sample problems in statistical data modelling. Math Modell Anal 15:137–151

  16. Wasserman L (2006) All of nonparametric statistics. Springer, New York

  17. Wilcox R (2005) Introduction to robust estimation and hypothesis testing, 2nd edn. Elsevier Academic Press, Burlington

Acknowledgements

Janis Valeinis acknowledges partial support from the project 2009/0223/1DP/1.1.1.2.0/09/APIA/VIAA/008 of the European Social Fund. We also thank Edmunds Cers for his programming assistance.

Author information

Correspondence to Mara Velina.


Appendix: Proofs

First, we present a technical lemma.

Lemma 4

Suppose \(1/3< \eta < 1/2\) and Assumption 3 is satisfied. Then

$$\begin{aligned} \lambda _1(\theta )=O_p(n_1^{-\eta }), \; \; \lambda _2(\theta )=O_p(n_2^{-\eta }) \end{aligned}$$

uniformly for \(\theta \in \lbrace \theta : |\theta -\theta _0| \le cn_1^{-\eta } \rbrace\), where c is some positive constant.

For the proof of Lemma 4, see [12].

Proof of Theorem 2

Denote \({\hat{\lambda }}_1= \lambda _1(\Delta , {\hat{\theta }}, {\hat{\sigma }}_1,{\hat{\sigma }}_2)\), \({\hat{\lambda }}_2=\lambda _2(\Delta , {\hat{\theta }}, {\hat{\sigma }}_1,{\hat{\sigma }}_2)\).

First, we show that given the root \({\hat{\theta }}={\theta }(\Delta ,{\hat{\sigma }}_1,{\hat{\sigma }}_2)\) of (3.3), the following holds:

$$\begin{aligned} \sqrt{n_1}({\hat{\theta }} - \theta _0) \xrightarrow {d} N\left( 0,\dfrac{V_1 V_2}{c_1}\right) , \end{aligned}$$
(5.1)
$$\begin{aligned} {\hat{\lambda }}_1 = -k\left( \dfrac{M_2}{M_1}\right) {\hat{\lambda }}_2+o_p(n_1^{-1/2}), \end{aligned}$$
(5.2)
$$\begin{aligned} \sqrt{n_1} {\hat{\lambda }}_2 \xrightarrow {d} N\left( 0,\dfrac{M_1^2}{kc_1}\right) , \end{aligned}$$
(5.3)

where

$$\begin{aligned} c_1=V_2M_1^2+kV_1M_2^2. \end{aligned}$$

Consider

$$\begin{aligned} Q_1 (\theta , \lambda _1, \lambda _2)&= \frac{1}{n_1}\sum _{i=1}^{n_1} \frac{\psi \left( \frac{X_i-\theta }{\sigma _1} \right) }{1+ \lambda _1 \psi \left( \frac{X_i-\theta }{\sigma _1} \right) }, \\ Q_2(\theta , \lambda _1, \lambda _2)&= \frac{1}{n_2} \sum _{j=1}^{n_2} \frac{\psi \left( \frac{Y_j-\Delta - \theta }{\sigma _2} \right) }{1+ \lambda _2 \psi \left( \frac{Y_j-\Delta - \theta }{\sigma _2} \right) },\\ Q_3(\theta , \lambda _1, \lambda _2)&= \lambda _1 \times \frac{1}{n_1}\sum _{i=1}^{n_1} \frac{\psi ' \left( \frac{X_i-\theta }{\sigma _1} \right) }{1+ \lambda _1 \psi \left( \frac{X_i-\theta }{\sigma _1} \right) } + \lambda _2 \times \frac{1}{n_1} \sum _{j=1}^{n_2} \frac{\psi ' \left( \frac{Y_j-\Delta - \theta }{\sigma _2} \right) }{1+ \lambda _2 \psi \left( \frac{Y_j-\Delta - \theta }{\sigma _2} \right) }. \end{aligned}$$

Then we have

$$\begin{aligned} Q_{i}( {\hat{\theta }}, {\hat{\lambda }}_1, {\hat{\lambda }}_2) = 0 \quad\text {for } i=1, 2, 3. \end{aligned}$$
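Computationally, this system can be solved directly for \((\theta , \lambda _1, \lambda _2)\) at a fixed \(\Delta\). The following base-R sketch is illustrative only: the function names, the generic \(\psi\) arguments, and the solver suggestion are our assumptions, not the interface of the EL package.

    ## Illustrative base-R sketch of the estimating equations Q1, Q2, Q3.
    ## psi, psi_prime: a smooth psi-function and its derivative;
    ## Delta, s1, s2: the fixed difference and the plug-in scale estimates.
    Q <- function(par, x, y, Delta, s1, s2, psi, psi_prime) {
      theta <- par[1]; l1 <- par[2]; l2 <- par[3]
      n1  <- length(x)
      px  <- psi((x - theta) / s1)
      py  <- psi((y - Delta - theta) / s2)
      dpx <- psi_prime((x - theta) / s1)
      dpy <- psi_prime((y - Delta - theta) / s2)
      c(mean(px / (1 + l1 * px)),              # Q1
        mean(py / (1 + l2 * py)),              # Q2
        l1 * mean(dpx / (1 + l1 * px)) +       # Q3: note that both sums
          l2 * sum(dpy / (1 + l2 * py)) / n1)  # are divided by n1
    }

A root of Q (found, for example, with rootSolve::multiroot or a hand-rolled Newton iteration) yields \(({\hat{\theta }}, {\hat{\lambda }}_1, {\hat{\lambda }}_2)\).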

By Taylor expansion, we have

$$\begin{aligned} 0&= Q_i( {\hat{\theta }}, {\hat{\lambda }}_1, {\hat{\lambda }}_2) = Q_i(\theta _0, 0, 0) + \frac{\partial Q_i (\theta _0, 0, 0)}{\partial \theta } ( {\hat{\theta }} - \theta _0) + \frac{\partial Q_i (\theta _0, 0, 0)}{\partial \lambda _1}{\hat{\lambda }}_1 \\&\quad +\,\frac{\partial Q_i (\theta _0, 0, 0)}{\partial \lambda _2} {\hat{\lambda }}_2 +O_p(n_1^{-2\eta }), \; i=1,2,3. \end{aligned}$$

Hence

$$\begin{aligned}&Q_i(\theta _0, 0, 0) + \frac{\partial Q_i (\theta _0, 0, 0)}{\partial \theta } ( {\hat{\theta }} - \theta _0) + \frac{\partial Q_i (\theta _0, 0, 0)}{\partial \lambda _1}{\hat{\lambda }}_1 \\&\quad + \frac{\partial Q_i (\theta _0, 0, 0)}{\partial \lambda _2} {\hat{\lambda }}_2 = o_p(n_1^{-1/2}),\; i=1,2,3. \end{aligned}$$

From conditions (C1)–(C3) of Assumption 3, it follows that

$$\begin{aligned} \frac{\partial Q_1(\theta _0,0,0)}{\partial \theta }&\rightarrow M_1 \ \text {a.s.},&\frac{\partial Q_1(\theta _0,0,0)}{\partial \lambda _1}&\rightarrow -V_1 \ \text {a.s.},&\frac{\partial Q_1(\theta _0,0,0)}{\partial \lambda _2}&= 0,\\ \frac{\partial Q_2(\theta _0,0,0)}{\partial \theta }&\rightarrow M_2 \ \text {a.s.},&\frac{\partial Q_2(\theta _0,0,0)}{\partial \lambda _1}&=0,&\frac{\partial Q_2(\theta _0,0,0)}{\partial \lambda _2}&\rightarrow - V_2 \ \text {a.s.},\\ \frac{\partial Q_3(\theta _0,0,0)}{\partial \theta }&=0,&\frac{\partial Q_3(\theta _0,0,0)}{\partial \lambda _1}&\rightarrow M_1 \ \text {a.s.},&\frac{\partial Q_3(\theta _0,0,0)}{\partial \lambda _2}&\rightarrow k M_2 \ \text {a.s.} \end{aligned}$$

Thus

$$\begin{aligned} \left( \begin{matrix} {\hat{\theta }} - \theta _0\\ {\hat{\lambda }}_1 \\ {\hat{\lambda }}_2 \end{matrix} \right) = S^{-1} \left( \begin{matrix} Q_1 (\theta _0, 0 , 0) \\ Q_2 (\theta _0, 0 , 0)\\ 0 \end{matrix} \right) + o_p (n_1^{-1/2}), \end{aligned}$$

where

$$\begin{aligned} S= \left( \begin{matrix} M_1 &\quad -V_1 &\quad 0 \\ M_2 &\quad 0 &\quad -V_2 \\ 0 &\quad M_1 &\quad k M_2 \end{matrix} \right) \end{aligned}$$

and

$$\begin{aligned} S^{-1}= \frac{1}{c_1}\left( \begin{matrix} V_2 M_1 &\quad k V_1 M_2 &\quad V_1 V_2 \\ -k M_2^2 &\quad k M_1 M_2 &\quad V_2 M_1\\ M_1 M_2 &\quad -M_1^2 &\quad V_1 M_2 \end{matrix} \right) . \end{aligned}$$
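The stated form of \(S^{-1}\) is easy to confirm numerically. A quick R sketch with arbitrary illustrative values of \(M_1, M_2, V_1, V_2, k\) (a sanity check only, not part of the proof):

    ## Numerical check that the displayed matrix equals solve(S).
    M1 <- 0.7; M2 <- 0.6; V1 <- 1.2; V2 <- 0.9; k <- 1.5
    c1 <- V2 * M1^2 + k * V1 * M2^2
    S  <- matrix(c(M1, -V1,      0,
                   M2,   0,    -V2,
                    0,  M1, k * M2), nrow = 3, byrow = TRUE)
    Sinv <- matrix(c( V2 * M1,   k * V1 * M2, V1 * V2,
                     -k * M2^2,  k * M1 * M2, V2 * M1,
                      M1 * M2,  -M1^2,        V1 * M2), nrow = 3, byrow = TRUE) / c1
    max(abs(solve(S) - Sinv))  # near machine precision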

Then, we have

$$\begin{aligned} {\hat{\theta }} - \theta _0 = \frac{1}{c_1}(V_2 M_1 Q_1(\theta _0,0,0) + k V_1 M_2 Q_2(\theta _0, 0, 0)) +o_p(n_1^{-1/2}), \\ {\hat{\lambda }}_1 = -\frac{k M_2}{c_1}(M_2 Q_1(\theta _0,0,0)-M_1 Q_2(\theta _0,0,0)) +o_p(n_1^{-1/2}), \\ {\hat{\lambda }}_2= \frac{M_1}{c_1}(M_2 Q_1(\theta _0,0,0)-M_1 Q_2(\theta _0,0,0))+o_p(n_1^{-1/2}). \end{aligned}$$

Note that according to Assumption 3 it holds that

$$\begin{aligned} \sqrt{n_1} \left( \begin{matrix} Q_1(\theta _0, 0, 0) \\ Q_2(\theta _0, 0, 0) \end{matrix} \right) \xrightarrow {d} N \left( 0, \left( \begin{matrix} V_1 &{}\quad 0\\ 0 &{}\quad k^{-1} V_2 \end{matrix} \right) \right) . \end{aligned}$$

The statements (5.1)–(5.3) follow.

By Assumption 1 (A2), \(\psi ^3 \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right)\) is bounded by some integrable function \(G_1(X)\). Thus \(E |\psi ( (X_i-{\hat{\theta }})/{\hat{\sigma }}_1) |^3\) exists, which is equivalent to

$$\begin{aligned} \sum P(|\psi ( (X_i-{\hat{\theta }})/{\hat{\sigma }}_1)|^3 > n_1) < \infty , \end{aligned}$$

see, for example, [9]. It follows by the Borel–Cantelli lemma that \(|\psi ( (X_i-{\hat{\theta }})/{\hat{\sigma }}_1)|< n_1^{1/3}\) for all sufficiently large \(n_1\), with probability 1. This implies that

$$\begin{aligned} \max _{1 \le i \le n_1} |\psi ( (X_i-{\hat{\theta }})/{\hat{\sigma }}_1)| \le n_1^{1/3}. \end{aligned}$$
(5.4)

Thus, using Lemma 4 with \(\eta \in (1/3, 1/2)\) we have

$$\begin{aligned} \max _{1\le i\le {n_1}}\left| {\hat{\lambda }}_1 \psi \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) \right| = O_p(n_1^{-\eta })O_p(n_1^{1/3})=o_p(1) \end{aligned}$$

and with \(\xi \in \left[ 0, {\hat{\lambda }}_1 \psi \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) \right]\) we have by the law of large numbers that

$$\begin{aligned} \frac{1}{n_1} \sum _{i=1}^{n_1} \frac{\psi ^3 \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) }{(1+\xi )^3}=O_p(1). \end{aligned}$$

Thus, the following holds

$$\begin{aligned} \frac{n_1}{3} {\hat{\lambda }}_1^3 \frac{1}{n_1} \sum _{i=1}^{n_1} \frac{\psi ^3 \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) }{(1+\xi )^3} = O(n_1)O_p(n_1^{-3\eta })O_p(1) = O_p(n_1^{-3\eta +1})=o_p(1). \end{aligned}$$

A similar argument applies to \({\hat{\lambda }}_2\). Then, using a Taylor expansion of \(\log (1+x)\), we have

$$\begin{aligned} \log R(\Delta _0,{\hat{\theta }},{\hat{\sigma }}_1, {\hat{\sigma }}_2)&= -\sum _{i=1}^{n_1} \log \left( 1+ {\hat{\lambda }}_1 \psi \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) \right) -\sum _{j=1}^{n_2} \log \left( 1+ {\hat{\lambda }}_2 \psi \left( \frac{Y_j-\Delta _0 - {\hat{\theta }}}{{\hat{\sigma }}_2} \right) \right) \\&= -n_1 {\hat{\lambda }}_1 S_{1x} ({\hat{\theta }})+ \frac{n_1}{2}{\hat{\lambda }}_1^2 S_{2x}({\hat{\theta }}) -n_2 {\hat{\lambda }}_2 S_{1y}({\hat{\theta }})+ \frac{n_2}{2} {\hat{\lambda }}_2^2 S_{2y}({\hat{\theta }}) + o_p(1), \end{aligned}$$
(5.5)

where

$$\begin{aligned} S_{1x}({\hat{\theta }})&=\frac{1}{n_1}\sum _{i=1}^{n_1}\psi \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) ,&S_{2x}({\hat{\theta }})&=\frac{1}{n_1}\sum _{i=1}^{n_1} \psi ^2 \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) ,\\ S_{1y}({\hat{\theta }})&=\frac{1}{n_2}\sum _{j=1}^{n_2} \psi \left( \frac{Y_j-\Delta _0 - {\hat{\theta }}}{{\hat{\sigma }}_2} \right) ,&S_{2y}({\hat{\theta }})&=\frac{1}{n_2}\sum _{j=1}^{n_2} \psi ^2 \left( \frac{Y_j-\Delta _0 - {\hat{\theta }}}{{\hat{\sigma }}_2} \right) . \end{aligned}$$

From (3.2), we have

$$\begin{aligned} 0&= \frac{1}{n_1}\sum _{i=1}^{n_1} \frac{\psi \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) }{1+ {\hat{\lambda }}_1 \psi \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) } = \frac{1}{n_1} \sum _{i=1}^{n_1} \psi \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) \left( 1 - {\hat{\lambda }}_1 \psi \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) + \frac{{\hat{\lambda }}_1^2 \psi ^2 \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) }{1 + {\hat{\lambda }}_1 \psi \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) } \right) \\&= S_{1x}({\hat{\theta }}) - {\hat{\lambda }}_1 S_{2x}({\hat{\theta }}) + \frac{1}{n_1} \sum _{i=1}^{n_1} \frac{{\hat{\lambda }}_1^2 \psi ^3 \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) }{1 + {\hat{\lambda }}_1 \psi \left( \frac{X_i-{\hat{\theta }}}{{\hat{\sigma }}_1} \right) }. \end{aligned}$$

The absolute value of the last term is bounded by

$$\begin{aligned} \frac{1}{n_1} \sum _{i=1}^{n_1} |\psi ^3 ((X_i-{\hat{\theta }})/{\hat{\sigma }}_1)| |{\hat{\lambda }}_1|^2 |1 + {\hat{\lambda }}_1 \psi ((X_i-{\hat{\theta }})/{\hat{\sigma }}_1)|^{-1} = O(1)O_p(n_1^{-2 \eta })O_p(1)=O_p(n_1^{-2\eta }). \end{aligned}$$

Thus, it follows (using a similar argument for \({\hat{\lambda }}_2\)) that

$$\begin{aligned} S_{1x}({\hat{\theta }})= {\hat{\lambda }}_1 S_{2x}({\hat{\theta }}) + O_p(n_1^{-2\eta }),\;\;\; S_{1y}({\hat{\theta }})={\hat{\lambda }}_2 S_{2y} ({\hat{\theta }}) + O_p(n_1^{-2\eta }). \end{aligned}$$

Hence from (5.5), we have

$$\begin{aligned} -2 \log R(\Delta _0,{\hat{\theta }},{\hat{\sigma }}_1, {\hat{\sigma }}_2)= n_1 {\hat{\lambda }}_1^2 S_{2x}({\hat{\theta }})+ n_2 {\hat{\lambda }}_2^2 S_{2y} ({\hat{\theta }}) + o_p(1). \end{aligned}$$
(5.6)

From condition (C3) of Assumption 3, we have

$$\begin{aligned} S_{2x}({\hat{\theta }})=V_1 + o_p(1), \;\;\; S_{2y}({\hat{\theta }})=V_2 + o_p(1), \end{aligned}$$

and using (5.2)

$$\begin{aligned} -2 \log R(\Delta _0,{\hat{\theta }},{\hat{\sigma }}_1, {\hat{\sigma }}_2)&= n_1 {\hat{\lambda }}_1^2 S_{2x}({\hat{\theta }})+ n_2 {\hat{\lambda }}_2^2 S_{2y} ({\hat{\theta }}) + o_p(1)\\&= n_1 k^2 \frac{M_2^2}{M_1^2} {\hat{\lambda }}_2^2 V_1 + n_2 {\hat{\lambda }}_2^2 V_2 + o_p(1)\\&= k \left[ \sqrt{n_1} {\hat{\lambda }}_2 \right] ^2 \left( \frac{kV_1 M_2^2 + V_2 M_1^2}{M_1^2}\right) + o_p(1). \end{aligned}$$

Using (5.3), we have

$$\begin{aligned} \sqrt{n_1} {\hat{\lambda }}_2 \xrightarrow {d} N \left( 0, \frac{M_1^2}{k(V_2 M_1^2 + kV_1 M_2^2)}\right) . \end{aligned}$$
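To spell out the final step: this limiting variance is exactly the reciprocal of the factor \(k(kV_1M_2^2+V_2M_1^2)/M_1^2\) multiplying \([\sqrt{n_1}{\hat{\lambda }}_2]^2\) above, so that

$$\begin{aligned} -2 \log R(\Delta _0,{\hat{\theta }},{\hat{\sigma }}_1, {\hat{\sigma }}_2) = \left[ \sqrt{n_1 k}\, {\hat{\lambda }}_2 \, \frac{\sqrt{c_1}}{M_1} \right] ^2 + o_p(1), \quad \sqrt{n_1 k}\, {\hat{\lambda }}_2 \, \frac{\sqrt{c_1}}{M_1} \xrightarrow {d} N(0,1). \end{aligned}$$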

Then

$$\begin{aligned} -2 \log R(\Delta _0,{\hat{\theta }},{\hat{\sigma }}_1, {\hat{\sigma }}_2) \xrightarrow {d} \chi ^2_1, \end{aligned}$$

which proves Theorem 2. \(\square\)

Proof of Lemma 1

We present the proof only for the sample X; for Y the result is obtained similarly. Condition (A2) of Assumption 1 states that \(\psi '\) is bounded by some integrable function; thus, the expectation exists and condition (C1) of Assumption 3 holds by the law of large numbers. To prove (C2) and (C3), we follow the technique used in [8, Section 10.6] to establish the asymptotic distribution of the location M-estimates with a preliminary scale. Denote \(u_i=X_i-\theta _0\) and \({\hat{\sigma }}_1=\sigma _1^0+\delta\). Expand \({{\psi }} (u_i/{\hat{\sigma }}_1)\) in a first-order Taylor series in \({\hat{\sigma }}_1\) around \(\sigma _1^0\):

$$\begin{aligned} {{\psi }} \bigg ( \frac{u_i}{{\hat{\sigma }}_1} \bigg )={{\psi }} \bigg ( \frac{u_i}{\sigma _1^0+\delta } \bigg )\approx {{\psi }}\bigg (\frac{u_i}{\sigma _1^0} \bigg )- \frac{u_i}{\sigma _1^0}{{\psi }}' \bigg (\frac{u_i}{\sigma _1^0} \bigg ) \bigg (1- \frac{\sigma _1^0}{{\hat{\sigma }}_1} \bigg ). \end{aligned}$$

Summing over i and dividing by \(\sqrt{n_1}\), we obtain

$$\begin{aligned} \frac{1}{\sqrt{n_1}} \sum _{i=1}^{n_1} {{\psi }} \bigg (\frac{u_i}{{\hat{\sigma }}_1}\bigg )=\sqrt{n_1}A_{n_1} - \sqrt{n_1}B_{n_1} \bigg (1- \frac{\sigma _1^0}{{\hat{\sigma }}_1} \bigg ), \end{aligned}$$
(5.7)

where

$$\begin{aligned} A_{n_1} =\frac{1}{n_1} \sum _{i=1}^{n_1} {{\psi }} \bigg (\frac{u_i}{\sigma _1^0} \bigg ), \, \, \, B_{n_1}=\frac{1}{n_1} \sum _{i=1}^{n_1} \bigg ( \frac{u_i}{\sigma _1^0} \bigg ) {{\psi }}'\bigg ( \frac{u_i}{\sigma _1^0} \bigg ). \end{aligned}$$

\(E \psi (u_i/\sigma _1^0) = 0\) by the definition of the M-estimator, thus \(\sqrt{n_1} A_{n_1} \xrightarrow {d} N(0,V_1)\) by (B2). According to (B3), \(\sqrt{n_1} B_{n_1}\) tends to a normal distribution by the central limit theorem, and since \(\big (1- \frac{\sigma _1^0}{{\hat{\sigma }}_1} \big ) \rightarrow 0\) by condition (B1), the second term on the right-hand side of (5.7) tends to zero by Slutsky's lemma. Hence, we obtain (C2).
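The accuracy of the first-order expansion used above is easy to check numerically. A short R sketch, using the hypothetical smooth \(\psi\)-function \(\psi (x)=k\tanh (x/k)\) purely for illustration (it is smooth, odd, and bounded, but it is not the smoothed Huber function of (2.5)):

    ## Numerical check of the first-order expansion of psi(u/sigma_hat)
    ## around sigma_0; psi(x) = k * tanh(x/k) is an illustrative smooth,
    ## odd, bounded psi-function (not the paper's smoothed Huber psi).
    k    <- 1.345
    psi  <- function(x) k * tanh(x / k)
    dpsi <- function(x) 1 / cosh(x / k)^2
    u <- 0.8; s0 <- 1; delta <- 0.05
    exact  <- psi(u / (s0 + delta))
    approx <- psi(u / s0) - (u / s0) * dpsi(u / s0) * (1 - s0 / (s0 + delta))
    abs(exact - approx)  # ~7e-4, of order delta^2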

Now, we expand \({{\psi }}^2 (u_i/ {\hat{\sigma }}_1)\) in the same way around \(\sigma _1^0\):

$$\begin{aligned} {{\psi }}^2 \bigg ( \frac{u_i}{\sigma _1^0+\delta } \bigg )\approx {{\psi }}^2 \bigg (\frac{u_i}{\sigma _1^0} \bigg )- 2{{\psi }} \bigg (\frac{u_i}{\sigma _1^0} \bigg ) {{\psi }}' \bigg (\frac{u_i}{\sigma _1^0} \bigg ) \bigg ( \frac{u_i}{\sigma _1^0} \bigg ) \bigg (1- \frac{\sigma _1^0}{{\hat{\sigma }}_1} \bigg ). \end{aligned}$$

Summing over i and dividing by \(n_1\),

$$\begin{aligned} \frac{1}{n_1} \sum _{i=1}^{n_1}{{\psi }}^2 \bigg ( \frac{u_i}{\sigma _1^0+\delta } \bigg )\approx C_{n_1} - 2D_{n_1} \bigg (1- \frac{\sigma _1^0}{{\hat{\sigma }}_1} \bigg ), \end{aligned}$$

where

$$\begin{aligned} C_{n_1}=\frac{1}{n_1} \sum _{i=1}^{n_1} {{\psi }}^2 \bigg ( \frac{u_i}{\sigma _1^0} \bigg ), \text {and } D_{n_1}=\frac{1}{n_1} \sum _{i=1}^{n_1} {{\psi }} \bigg (\frac{u_i}{\sigma _1^0} \bigg ) {{\psi }}' \bigg (\frac{u_i}{\sigma _1^0} \bigg ) \bigg ( \frac{u_i}{\sigma _1^0} \bigg ). \end{aligned}$$

By the law of large numbers and (B2), \(C_{n_1}\) tends to \(V_1\); \(D_{n_1}\) tends to a constant by assumption (B4); and \(\big (1- \frac{\sigma _1^0}{{\hat{\sigma }}_1} \big ) \rightarrow 0\). Hence we obtain (C3). \(\square\)

Proof of Lemma 3

First, we verify that condition (A2) of Assumption 1 holds. The derivative \({\tilde{\psi }}_k'\) is continuous by the general smoothing principle for M-estimators given in (2.5). Next, \(0 \le {\tilde{\psi }}_k'(x) \le 1\) and \(|{\tilde{\psi }}_k(x)|^3 \le k^3\), so both are bounded.

Now, we verify that the conditions of Assumption 2 hold. (B1) holds for the MAD, \({\hat{\sigma }}_1=\text {MAD}=\mathop {Med}\{|X-\mathop {Med}(X)|\}\), under mild smoothness conditions on the underlying distribution F (see, for example, [2]). (B2) holds because \({\tilde{\psi }}_k\) with \(k<\infty\) is a bounded \(\psi\)-function. For \(F_1\) symmetric, \(\theta _0\) coincides with the center of symmetry and, since \({\tilde{\psi }}_k\) is odd, (B3) holds. Finally, since \({\tilde{\psi }}'_k(x)=0\) for \(|x|>k\), for \(F_1\) symmetric (B4) is the expectation of an even and bounded function and hence finite. \(\square\)
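The properties checked in this proof can also be spot-checked numerically. A small R sketch, again with the hypothetical smooth \(\psi (x)=k\tanh (x/k)\) standing in for the smoothed Huber function, together with the MAD used for (B1):

    ## Numerical spot-check of the conditions verified above; psi is an
    ## illustrative smooth psi-function, not the paper's (2.5).
    k    <- 1.345
    psi  <- function(x) k * tanh(x / k)
    dpsi <- function(x) 1 / cosh(x / k)^2
    x <- seq(-10, 10, by = 0.01)
    all(dpsi(x) >= 0 & dpsi(x) <= 1)  # 0 <= psi' <= 1
    all(abs(psi(x))^3 <= k^3)         # |psi|^3 bounded by k^3
    max(abs(psi(-x) + psi(x)))        # ~0: psi is odd, as used for (B3)
    ## MAD without the usual consistency constant:
    set.seed(1)
    X <- rnorm(100)
    mad(X, constant = 1)              # Med{ |X - Med(X)| }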

Cite this article

Velina, M., Valeinis, J. & Luta, G. Empirical Likelihood-Based Inference for the Difference of Two Location Parameters Using Smoothed M-Estimators. J Stat Theory Pract 13, 34 (2019). https://doi.org/10.1007/s42519-019-0037-8