Consistency properties for the wavelet estimator in nonparametric regression model with dependent errors

In this paper, we establish the pth mean consistency, complete consistency, and the rate of complete consistency for the wavelet estimator in a nonparametric regression model with m-extended negatively dependent random errors. We show that the best rates can be nearly O(n^{-1/3}) under some general conditions. The results obtained in this paper markedly improve and extend some corresponding ones to a much more general setting.

the complete consistency of the weighted estimator in (1) with extended negatively dependent (END) errors; Chen et al. [10] obtained the mean consistency, strong consistency, and complete consistency of the weighted estimator in model (1) with martingale difference errors.
Compared to these smooth methods, the wavelet method has the advantage of being able to estimate nonsmooth functions. Therefore, in this paper, we concentrate on the wavelet estimation of the unknown function g in model (1). First, we recall two necessary concepts. Definition 1.1 A scale function ϕ is q-regular (q ∈ Z) if for any l ≤ q and any integer k, we have |d^l ϕ/dx^l| ≤ C_k (1 + |x|)^{-k}, where C_k is a generic constant depending only on k.
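As a toy illustration (not from the paper), a rapidly decaying bump such as ϕ(x) = e^{-x²} satisfies bounds of exactly this type: for every integer k, sup_x |ϕ(x)|(1 + |x|)^k is finite. A minimal numerical sketch, with the Gaussian bump standing in for a scale function:

```python
import numpy as np

# Illustration only: phi(x) = exp(-x^2) decays faster than any polynomial,
# so |phi(x)| * (1 + |x|)^k stays bounded on the real line for every k,
# which is the shape of the q-regularity bound in Definition 1.1.
def decay_bound(k, xs):
    phi = np.exp(-xs**2)
    return np.max(phi * (1.0 + np.abs(xs))**k)

xs = np.linspace(-20.0, 20.0, 100001)
for k in (1, 2, 5, 10):
    print(f"k={k}: sup |phi(x)|(1+|x|)^k ~ {decay_bound(k, xs):.4f}")
```

The suprema grow with k (the constant C_k depends on k), but each one is finite, which is all the definition requires.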
It is well known that the wavelet method is a powerful tool in many fields, such as applied mathematics, physics, computer science, signal and information processing, and image processing. Since Antoniadis et al. [11] introduced this method to the nonparametric regression model, many results have been established. For example, Xue [12] investigated the rates of strong convergence for the wavelet estimator under complete and censored data; Sun and Chai [13] established the weak consistency, strong consistency, and the convergence rate for the wavelet estimator under stationary α-mixing samples; Li et al. [14] obtained the weak consistency and the rate of uniformly asymptotic normality for the wavelet estimator with associated samples; Liang [15] established the asymptotic normality for the wavelet estimator in a heteroscedastic model with α-mixing samples; Li et al. [16] obtained Berry-Esseen bounds for the wavelet estimator in a regression model with linear process errors generated by ϕ-mixing sequences; Tang et al. [17] studied the asymptotic normality for the wavelet estimator with asymptotically negatively associated random errors; Ding et al. [18] investigated the mean consistency, complete consistency, and the rate of complete consistency for the wavelet estimator with END random errors; Ding et al. [19] established the Berry-Esseen bound of wavelet estimators in a nonparametric regression model with asymptotically negatively associated errors; and Ding and Chen [20] studied the asymptotic normality of the wavelet estimators in a heteroscedastic semiparametric regression model with ϕ-mixing errors.
In this work, we further study the consistency properties of wavelet estimator (2) under a more general dependence structure. We first recall some concepts of dependent random variables. Definition 1.3 A finite collection of random variables X_1, X_2, . . . , X_n is said to be END if there exists a constant M > 0 such that

P(X_1 > x_1, X_2 > x_2, . . . , X_n > x_n) ≤ M ∏_{i=1}^n P(X_i > x_i)

and

P(X_1 ≤ x_1, X_2 ≤ x_2, . . . , X_n ≤ x_n) ≤ M ∏_{i=1}^n P(X_i ≤ x_i)

for all real numbers x_1, x_2, . . . , x_n. An infinite sequence {X_n, n ≥ 1} is said to be END if every finite subcollection is END.
An array {X_ni, 1 ≤ i ≤ n, n ≥ 1} of random variables is said to be rowwise END if for every n ≥ 1, the random variables {X_ni, 1 ≤ i ≤ n} are END. The concept of END random variables was introduced by Liu [21], who showed that the END structure can reflect not only negative dependence structures but also some positive ones. It has been proved that the END structure contains NA, negatively superadditive dependent (NSD), and negatively orthant dependent (NOD) random variables, the concepts of which were introduced by Joag-Dev and Proschan [22], Hu [23], and Lehmann [24], respectively. Therefore this dependence structure has attracted increasing attention, and many results have been successfully established since the concept was raised. For more details, we refer the reader to Liu [25], Shen [26], Wang and Wang [27], Wu and Guan [28], Shen et al. [29], Yang et al. [30], and Wu et al. [31], among others.
Wang et al. [32] introduced the following concept of m-extended negatively dependent (m-END) random variables.

Definition 1.4
Let m ≥ 1 be a fixed integer. A sequence {X_n, n ≥ 1} of random variables is said to be m-END if for any n ≥ 2 and any indices i_1, i_2, . . . , i_n such that |i_k - i_j| ≥ m for all 1 ≤ k ≠ j ≤ n, the random variables X_{i_1}, X_{i_2}, . . . , X_{i_n} are END.
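To make the index condition concrete, here is a small helper (illustration only, not part of the paper) that tests whether a tuple of indices has all pairwise gaps at least m, i.e., whether Definition 1.4 requires the corresponding variables to be END:

```python
from itertools import combinations

def gaps_at_least(indices, m):
    """Return True iff |i_k - i_j| >= m for every pair of distinct
    indices -- exactly the index condition in Definition 1.4."""
    return all(abs(a - b) >= m for a, b in combinations(indices, 2))

# With m = 3, the indices 1, 4, 8 are pairwise at least 3 apart, so an
# m-END sequence must have (X_1, X_4, X_8) END; the tuple 1, 2, 8 is not
# constrained, since |1 - 2| < 3.
print(gaps_at_least((1, 4, 8), 3))   # True
print(gaps_at_least((1, 2, 8), 3))   # False
print(gaps_at_least((1, 4, 8), 1))   # True: m = 1 imposes no extra gap
```

With m = 1 every tuple of distinct indices qualifies, which is why 1-END coincides with END.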
The concept of m-END random variables is a natural extension of END random variables; it reduces to END when m = 1. Hence m-END is a more general structure, and it is of interest to investigate it. Several papers have already studied m-END random variables. For example, Xu et al. [33] studied the mean consistency of the weighted estimator in a nonparametric regression model based on m-END random errors; Wang et al. [34] obtained the complete and complete moment convergence for partial sums of m-END random variables and applied them to EV regression models.
We will use the following concept of stochastic domination. Definition 1.5 A sequence {X_n, n ≥ 1} of random variables is said to be stochastically dominated by a random variable X if there exists a positive constant C such that P(|X_n| > x) ≤ C P(|X| > x) for all x ≥ 0 and n ≥ 1. An array {X_ni, 1 ≤ i ≤ n, n ≥ 1} of rowwise random variables is said to be stochastically dominated by a random variable X if there exists a positive constant C such that P(|X_ni| > x) ≤ C P(|X| > x) for all x ≥ 0 and all 1 ≤ i ≤ n, n ≥ 1.
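A concrete example (for illustration only): if every X_n is Uniform(0, 1) and X is Uniform(0, 2), then stochastic domination holds with C = 1, since 1 - x ≤ 1 - x/2 on [0, 1] and both tails vanish beyond that. A quick numerical check of the tail inequality using the closed-form tail functions:

```python
import numpy as np

# Closed-form tails: X_n ~ Uniform(0, 1), X ~ Uniform(0, 2).
def tail_Xn(x):   # P(|X_n| > x)
    return np.clip(1.0 - x, 0.0, 1.0)

def tail_X(x):    # P(|X| > x)
    return np.clip(1.0 - x / 2.0, 0.0, 1.0)

xs = np.linspace(0.0, 3.0, 3001)
C = 1.0
ok = np.all(tail_Xn(xs) <= C * tail_X(xs))
print("stochastic domination with C = 1:", ok)
```

Identically distributed sequences are trivially dominated by their common distribution with C = 1; the constant C gives slack for non-identical distributions with comparable tails.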
In this paper, we further investigate the consistency properties of estimator (2) in the nonparametric regression model (1) based on m-END random errors. We establish the pth mean consistency, complete consistency, and the rate of complete consistency under some general conditions. These results improve and extend the corresponding ones of Li et al. [14] and Ding et al. [18]. Moreover, the method used here is different from those of Li et al. [14] and Ding et al. [18].
In this paper, the symbols C, c_1, c_2, . . . represent generic positive constants whose values may vary in different places. Denote x^+ = max{0, x} and x^- = max{0, -x}. By I(A) we denote the indicator function of an event A.
The paper is organized as follows. The main results are stated in Sect. 2. Some important lemmas are presented in Sect. 3. The proofs of the main results are provided in Sect. 4.

Main results
The following assumptions are needed in the main results.
We now present our main results. The first one is the mean consistency of order p for estimator (2).
is an array of zero mean m-END random variables with sup_{1≤i≤n, n≥1} E|ε_ni|^p < ∞ for some p > 1. If 2^k → ∞ and 2^k/n → 0 as n → ∞, then for any t ∈ [0, 1],

Remark 2.2 Li et al. [14] obtained the weak consistency of (2) with NA random errors; they required 2^k = O(n^{1/3}) and sup_{1≤i≤n, n≥1} E|ε_ni|^p < ∞ for some p > 3/2. Ding et al. [18] extended the result of Li et al. [14] to pth mean consistency with END random errors under the moment condition sup_{1≤i≤n, n≥1} E|ε_ni|^p < ∞ for some p ≥ 2. Clearly, Theorem 2.1 markedly relaxes the choice of 2^k and weakens the moment condition. Hence Theorem 2.1 improves and extends the results of Ding et al. [18] and Li et al. [14] to m-END random errors.
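To make estimator (2) concrete, here is a hedged sketch using the Haar scale function, for which the wavelet kernel reduces to E_k(t, s) = 2^k · 1{⌊2^k t⌋ = ⌊2^k s⌋}, so the weights ∫_{A_ni} E_k(t, s) ds are rescaled overlap lengths of A_ni = ((i-1)/n, i/n] with the dyadic bin containing t. The equally spaced design and midpoint design points below are assumptions for illustration, not the paper's setting:

```python
import numpy as np

def haar_wavelet_estimate(t, Y, k):
    """Wavelet-type estimate ghat(t) = sum_i Y_i * int_{A_i} E_k(t, s) ds
    with the Haar scale function, where A_i = ((i-1)/n, i/n]."""
    n = len(Y)
    # Dyadic bin of resolution 2^k containing t: [b0, b1).
    b0 = np.floor(2**k * t) / 2**k
    b1 = b0 + 2.0**(-k)
    i = np.arange(1, n + 1)
    left, right = (i - 1) / n, i / n
    # weight_i = 2^k * |A_i intersected with [b0, b1)|
    overlap = np.clip(np.minimum(right, b1) - np.maximum(left, b0), 0.0, None)
    weights = 2**k * overlap
    return float(np.sum(Y * weights))

# Noise-free sanity checks at t = 0.3 with n = 64, 2^k = 8.
n, k = 64, 3
design = (np.arange(1, n + 1) - 0.5) / n    # midpoints of the A_i (an assumption)
Y_const = 2.0 * np.ones(n)                  # g constant equal to 2
print(haar_wavelet_estimate(0.3, Y_const, k))   # 2.0: the weights sum to 1
print(haar_wavelet_estimate(0.3, design, k))    # local average of g(x) = x near 0.3
```

With the Haar choice the estimator is a regressogram: it averages the responses whose intervals fall in the dyadic bin of t, and the condition 2^k/n → 0 says each bin eventually contains many observations.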
The next theorem is about the complete consistency of the wavelet estimator based on m-END random errors.
is an array of zero mean m-END random variables stochastically dominated by a random variable ε with E|ε|^{1+1/α} < ∞. Then for any t ∈ [0, 1],

Remark 2.3 Ding et al. [18] obtained the complete consistency of the wavelet estimator with END random errors under the conditions 2^k = O(n^{1/3}) and E|ε|^4 < ∞. Note that even if we choose 2^k = O(n^{1/3}) in Theorem 2.2, the moment condition required here is only E|ε|^{5/2} < ∞. Therefore Theorem 2.2 markedly improves and extends the corresponding result of Ding et al. [18] from END random errors to m-END random errors.

Some important lemmas
In this section, we state some lemmas, which will be used in proving our main results. The first one is a basic property of m-END random variables, which can be found in Wang et al. [32].

The following lemma gives Marcinkiewicz-Zygmund-type and Rosenthal-type inequalities for m-END random variables, proved by Xu et al. [33].

Lemma 3.2 Let {X_n, n ≥ 1} be a sequence of m-END random variables with EX_n = 0 and E|X_n|^p < ∞ for all n ≥ 1 and some p ≥ 1. Then for all n ≥ 1,

E|Σ_{i=1}^n X_i|^p ≤ c_{m,p} Σ_{i=1}^n E|X_i|^p if 1 ≤ p ≤ 2,

and

E|Σ_{i=1}^n X_i|^p ≤ c_{m,p} Σ_{i=1}^n E|X_i|^p + d_{m,p} (Σ_{i=1}^n EX_i^2)^{p/2} if p > 2,

where c_{m,p} and d_{m,p} are positive constants depending only on m and p.
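Independent mean-zero variables form a special case of m-END (the defining inequalities hold with M = 1), so the p = 4 Rosenthal-type bound can be sanity-checked exactly there. For i.i.d. Rademacher signs, E|S_n|^4 = 3n² - 2n, which is indeed of order Σ E|X_i|^4 + (Σ EX_i²)² = n + n². A brute-force check by enumeration (illustrative only; it does not verify the dependent case):

```python
from itertools import product

def exact_fourth_moment(n):
    """E|S_n|^4 for S_n = X_1 + ... + X_n with X_i i.i.d. Rademacher (+-1),
    computed exactly by enumerating all 2^n equally likely sign vectors."""
    total = sum(sum(signs) ** 4 for signs in product((-1, 1), repeat=n))
    return total / 2 ** n

for n in (2, 4, 6):
    print(n, exact_fourth_moment(n), 3 * n**2 - 2 * n)  # the two values agree
```

The (Σ EX_i²)^{p/2} term dominates here; without it, a bound by Σ E|X_i|^p = n alone would fail, which is why the p > 2 case needs the second Rosenthal term.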
The following lemma is due to Antoniadis et al. [11].

Lemma 3.4 Under Assumptions (H_1)-(H_3), we have
The following lemma follows from Definition 1.5 and integration by parts; see Wu [35] or Shen et al. [36] for detailed proofs.

Proofs of main results
Proof of Theorem 2.1 By (1) and (2) it follows that for any t ∈ (0, 1),

Since γ > 0 and 2^k → ∞ as n → ∞, by Lemma 3.3 it follows that

Hence we only need to prove

We assume without loss of generality that

then by Lemmas 3.2 and 3.4 we obtain that
If p > 2, then noting that sup_{1≤i≤n, n≥1} E|ε_ni|^p < ∞ implies sup_{1≤i≤n, n≥1} Eε_ni^2 < ∞ by the Jensen inequality, we again obtain by Lemmas 3.2 and 3.4 that

The proof is finished.

Proof of Theorem 2.2 By (3) and (4) it follows that, to complete the proof, we only need to show that for any ε > 0,

In view of Lemma 3.4(i) and 2^k/n = O(n^{-α}), we may assume without loss of generality that 0 < ∫_{A_ni} E_k(t, s) ds ≤ n^{-α}. For fixed 1 ≤ i ≤ n, n ≥ 1, define

By Lemma 3.1 it follows that {X_ni, 1 ≤ i ≤ n} are still m-END. We can easily check that

From 0 < ∫_{A_ni} E_k(t, s) ds ≤ n^{-α} and Lemmas 3.4 and 3.5 it follows that

To prove that I_2 < ∞, we first show that Σ_{i=1}^n EX_ni → 0 as n → ∞. Indeed, from Eε_ni = 0, 0 < ∫_{A_ni} E_k(t, s) ds ≤ n^{-α}, and Lemmas 3.4 and 3.5 it follows that

which implies that |Σ_{i=1}^n EX_ni| < ε/2 for all n large enough. Hence, by the Markov, C_r, and Jensen inequalities and by Lemma 3.2, we obtain that for q > 2/α,

By the C_r inequality, Definition 1.5, and Lemma 3.5 it follows that

Similarly to the proof of I_1 < ∞, we easily obtain

For I_{31}, we easily check that

Similarly to I_{32} < ∞, we have I_{312} < ∞. Now we prove I_{311} < ∞. Indeed, noting that q > 2/α > 1 + 1/α, by Lemma 3.4 we obtain that

Therefore we have proved I_3 < ∞. Finally, we show that I_4 < ∞. Observing that |X_ni| ≤ |∫_{A_ni} E_k(t, s) ds · ε_ni| and that E|ε|^{1+1/α} < ∞ implies Eε^2 < ∞, by Lemmas 3.4-3.5 and q > 2/α we obtain that

The proof is complete.
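The random variables X_ni in the proof above come from truncating the errors. As a sketch of the standard technique (the exact truncation level and form in the paper may differ; λ_n is a placeholder for a level of order n^{-α}-type), one splits each error into a bounded part and two tail parts:

```latex
\varepsilon_{ni}
  = \bigl[-\lambda_n I(\varepsilon_{ni} < -\lambda_n)
      + \varepsilon_{ni} I(|\varepsilon_{ni}| \le \lambda_n)
      + \lambda_n I(\varepsilon_{ni} > \lambda_n)\bigr]
  + (\varepsilon_{ni} + \lambda_n) I(\varepsilon_{ni} < -\lambda_n)
  + (\varepsilon_{ni} - \lambda_n) I(\varepsilon_{ni} > \lambda_n).
```

The bracketed part is bounded by λ_n and, being a monotone transformation of ε_ni, stays m-END by Lemma 3.1, while the two tail parts are controlled by the moment condition on ε via Lemma 3.5.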

Proof of Theorem 2.3 Note that by (H_1) and Lemma 3.3 we have

Hence, in view of (3), we only need to prove that for any ε > 0,

We again assume without loss of generality that 0 < ∫_{A_ni} E_k(t, s) ds ≤ n^{-α}. Define for each 1 ≤ i ≤ n, n ≥ 1,

Similarly to the proof of I_1 < ∞, by 0 < ∫_{A_ni} E_k(t, s) ds ≤ n^{-α}, Lemmas 3.4-3.5, and E|ε|^{2+2/α} < ∞ we have that

By Eε_ni = 0, 0 < ∫_{A_ni} E_k(t, s) ds ≤ n^{-α}, and Lemmas 3.4-3.5 we have that

Hence, by the Markov, C_r, and Jensen inequalities and by Lemma 3.2, we obtain that for q > max{2 + 2/α, 2/(αr)},

By the C_r inequality and Lemma 3.5 we have that

Similarly to J_1 < ∞, we have

Σ_{i=1}^n ∫_{A_ni} E_k(t, s) ds E|ε|I(|ε| > n^{α/2}) < ∞.
The proof is complete.