An extension on the rate of complete moment convergence for weighted sums of weakly dependent random variables

The authors study the rate of complete moment convergence for weighted sums of weakly dependent random variables without the assumption of identical distribution. Under the moment condition E|X|^α / (log(1+|X|))^{α/γ−1} < ∞ for 0 < γ < α with 1 < α ≤ 2, we establish a complete α-th moment convergence theorem for weighted sums in the weakly dependent case, which improves and extends related known results in the literature.


Introduction and main result
Existing methods and algorithms in the literature often assume that the underlying variables are independent, but this assumption is frequently implausible: in many stochastic models and statistical applications, the variables involved are dependent. It is therefore important and meaningful to extend results for independent variables to dependent cases. One such dependence structure is weak dependence (i.e., ρ*-mixing, also written ρ̃-mixing), which has attracted the attention of many researchers.

Definition 1.1. Let {X_n; n ≥ 1} be a sequence of random variables defined on a probability space (Ω, F, P). For any S ⊂ N = {1, 2, . . .}, define F_S = σ(X_i, i ∈ S). Let L_2(F_S) be the class of all F_S-measurable random variables with finite second moment. For an integer s ≥ 1, define the mixing coefficient

ρ*(s) = sup { ρ(F_S, F_T) : S, T ⊂ N, dist(S, T) ≥ s },

where

ρ(F_S, F_T) = sup { |E(XY) − EX·EY| / (Var X · Var Y)^{1/2} : X ∈ L_2(F_S), Y ∈ L_2(F_T) }

and dist(S, T) = inf {|i − j| : i ∈ S, j ∈ T}. Obviously, 0 ≤ ρ*(s+1) ≤ ρ*(s) ≤ 1 and ρ*(0) = 1. The sequence {X_n; n ≥ 1} is called ρ*-mixing if there exists s ∈ N such that ρ*(s) < 1. Clearly, if {X_n; n ≥ 1} is a sequence of independent random variables, then ρ*(s) = 0 for all s ≥ 1.

ρ*-mixing looks similar to another dependence structure, ρ-mixing, but the two are quite different. ρ*-mixing covers a wide class of dependence structures and was first introduced into limit theorems by Bradley [4]. Since then, many scholars have investigated the limit theory for ρ*-mixing random variables, and a number of important applications of ρ*-mixing have been established. For more details, we refer to [12,16,18,19,21,23,24], among others.
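As a concrete illustration of how a mixing coefficient can decay with the gap s, consider a stationary Gaussian AR(1) sequence (a hypothetical example, not from the paper): restricting the supremum in Definition 1.1 to single-site sets S = {i}, T = {j} with |i − j| ≥ s gives a lower bound of φ^s for ρ*(s). The sketch below computes only this lower bound, not the full supremum over index sets and L_2 functions.

```python
# Lower bound on the mixing coefficient of a Gaussian AR(1) process
# X_{t+1} = phi * X_t + eps_t: corr(X_i, X_j) = phi^{|i-j|}, so taking
# S = {i}, T = {j} with |i - j| = s in Definition 1.1 yields
# rho*(s) >= phi^s.  (Illustration only; not the full supremum.)
phi = 0.6
bounds = [phi ** s for s in range(1, 8)]

# The bounds decay geometrically and are non-increasing in s,
# mirroring the property 0 <= rho*(s+1) <= rho*(s).
monotone = all(b2 <= b1 for b1, b2 in zip(bounds, bounds[1:]))
print(bounds[0], bounds[-1], monotone)
```

For an i.i.d. sequence the same construction gives the trivial bound 0, consistent with ρ*(s) = 0 for all s ≥ 1.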
The concept of complete convergence was first given by Hsu and Robbins [9] as follows: a sequence of random variables {X_n; n ≥ 1} converges completely to a constant λ if Σ_{n=1}^∞ P(|X_n − λ| > ε) < ∞ for all ε > 0. By the Borel-Cantelli lemma, this implies that X_n → λ almost surely (a.s.). Thus, complete convergence plays a crucial role in investigating the limit theory for sums of random variables as well as for weighted sums. Chow [8] introduced the following notion of complete moment convergence: let {Z_n; n ≥ 1} be a sequence of random variables, and let a_n > 0, b_n > 0 and q > 0; if

Σ_{n=1}^∞ a_n E{ b_n^{−1} |Z_n| − ε }_+^q < ∞ for some or all ε > 0,

then the sequence {Z_n; n ≥ 1} is said to satisfy complete q-th moment convergence. It will be shown that complete moment convergence is a more general version of complete convergence, and is also stronger than the latter (see Remark 2.1). Following the related statements of Rosalsky and Thành [14] as well as those of Thành [17], we recall the definition of stochastic domination.
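Complete convergence can be seen numerically in the simplest i.i.d. setting. The following sketch (a hypothetical illustration, not part of the paper) takes i.i.d. standard normal summands, for which S_n/n ~ N(0, 1/n) and P(|S_n/n| > ε) = erfc(ε·sqrt(n/2)); the tail probabilities decay so fast that their series converges, which is exactly the Hsu-Robbins property.

```python
import math

# Numerical illustration of complete convergence (Hsu-Robbins) for i.i.d.
# standard normal summands: S_n / n ~ N(0, 1/n), so
#   P(|S_n / n| > eps) = erfc(eps * sqrt(n / 2)),
# and the series sum_n P(|S_n/n| > eps) converges for every eps > 0.
eps = 0.5
terms = [math.erfc(eps * math.sqrt(n / 2.0)) for n in range(1, 2001)]

partial = []
s = 0.0
for t in terms:
    s += t
    partial.append(s)

# The terms decay faster than any power of n, so the partial sums settle down.
print(partial[999], partial[-1], terms[-1])
```

The partial sums at n = 1000 and n = 2000 agree to many digits, visualizing the convergent series in Hsu and Robbins's definition.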
Definition 1.2. A sequence of random variables {X_n; n ≥ 1} is said to be stochastically dominated by a random variable X if

P(|X_n| > x) ≤ P(|X| > x) for all x ≥ 0 and n ≥ 1.

The concept of stochastic domination is a slight generalization of identical distribution. It is clearly seen that stochastic domination of {X_n; n ≥ 1} by the random variable X implies E|X_n|^p ≤ E|X|^p whenever the p-th moment of |X| exists, i.e. E|X|^p < ∞.
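The moment comparison in Definition 1.2 can be checked by hand on discrete distributions. The sketch below uses two hypothetical two-point distributions (not from the paper): it verifies the domination inequality on a grid covering all jump points of the survival functions, then confirms the implied moment inequality.

```python
# Discrete sanity check that stochastic domination (Definition 1.2,
# P(|X_n| > x) <= P(|X| > x) for all x >= 0) forces E|X_n|^p <= E|X|^p.
# The two distributions below are hypothetical examples.

def survival(dist, x):
    # dist: list of (value, prob) pairs for a nonnegative random variable
    return sum(p for v, p in dist if v > x)

def moment(dist, p):
    return sum((v ** p) * pr for v, pr in dist)

Xn = [(0.5, 0.5), (1.0, 0.5)]   # law of |X_n|
X = [(1.0, 0.5), (3.0, 0.5)]    # law of the dominating |X|

# Survival functions are step functions, so a grid through every
# jump point suffices to verify domination for all x >= 0.
grid = [0.0, 0.25, 0.5, 0.75, 1.0, 2.0, 3.0]
dominated = all(survival(Xn, x) <= survival(X, x) for x in grid)

print(dominated, moment(Xn, 2) <= moment(X, 2))
```
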
As is well known, weighted sums of random variables arise widely in important linear statistics (such as least squares estimators, nonparametric regression function estimators and jackknife estimates). For this reason, many researchers have devoted themselves to investigating the probabilistic limiting behavior of weighted sums of random variables; see, for example, Bai and Cheng [3], Cai [5], Chen and Sung [6], Cheng et al. [7], Lang et al. [11], Peng et al. [13], Sung [15,16] and Wu [20], among others.
Recently, Li et al. [12] extended the corresponding result of Chen and Sung [6] from negatively associated random variables to the ρ*-mixing case by a totally different method, and obtained the following theorem.

Theorem A. Let {X_n; n ≥ 1} be a sequence of ρ*-mixing random variables with EX_n = 0, stochastically dominated by a random variable X, and let {a_ni; 1 ≤ i ≤ n, n ≥ 1} be an array of real constants such that Σ_{i=1}^n |a_ni|^α = O(n) for some 0 < γ < α with 1 < α ≤ 2. Set b_n = n^{1/α} (log n)^{1/γ}. If E|X|^α / (log(1+|X|))^{α/γ−1} < ∞, then for all ε > 0,

Σ_{n=1}^∞ n^{−1} P( max_{1≤j≤n} |Σ_{i=1}^j a_ni X_i| > ε b_n ) < ∞.
In addition, Huang et al. [10] proved the following complete α-th moment convergence theorem for weighted sums of ρ*-mixing random variables under some moment conditions.

Theorem B. Let {X_n; n ≥ 1} be a sequence of ρ*-mixing random variables stochastically dominated by a random variable X, and let {a_ni; 1 ≤ i ≤ n, n ≥ 1} be an array of real constants such that Σ_{i=1}^n |a_ni|^α = O(n) for some 0 < α ≤ 2. Set b_n = n^{1/α} (log n)^{1/γ} for some γ > 0. Then, under a suitable moment condition on X (see [10]), for all ε > 0,

Σ_{n=1}^∞ n^{−1} E{ b_n^{−1} max_{1≤j≤n} |Σ_{i=1}^j a_ni X_i| − ε }_+^α < ∞. (1.5)

It is interesting to find the optimal moment conditions for (1.5). Huang et al. [10] also posed the following worth-pondering problem: does (1.5) still hold in the case α > γ under the almost optimal moment condition E|X|^α / (log(1+|X|))^{α/γ−1} < ∞?
Mainly inspired by the related results of Li et al. [12], Chen and Sung [6] and Huang et al. [10], the authors further study the convergence rate for weighted sums of ρ*-mixing random variables without the assumption of identical distribution. Under the almost optimal moment condition E|X|^α / (log(1+|X|))^{α/γ−1} < ∞ for 0 < γ < α with 1 < α ≤ 2, a version of the complete α-th moment convergence theorem for weighted sums of ρ*-mixing random variables is established. The main result not only improves the corresponding ones of Li et al. [12] and Chen and Sung [6], but also partially settles the open problem posed by Huang et al. [10]. We now state the main result; some important auxiliary lemmas and the proof of the theorem are detailed in the next section.
Theorem 1.1. Let {X_n; n ≥ 1} be a sequence of ρ*-mixing random variables with EX_n = 0, stochastically dominated by a random variable X, and let {a_ni; 1 ≤ i ≤ n, n ≥ 1} be an array of real constants such that Σ_{i=1}^n |a_ni|^α = O(n) for some 1 < α ≤ 2. Set b_n = n^{1/α} (log n)^{1/γ} for some 0 < γ < α. If E|X|^α / (log(1+|X|))^{α/γ−1} < ∞, then (1.5) holds, i.e., for all ε > 0,

Σ_{n=1}^∞ n^{−1} E{ b_n^{−1} max_{1≤j≤n} |Σ_{i=1}^j a_ni X_i| − ε }_+^α < ∞.

Throughout this paper, I(A) denotes the indicator function of the event A and I(A, B) = I(A ∩ B). The symbol C always denotes a positive constant, which may differ from place to place, and a_n = O(b_n) means a_n ≤ C b_n.
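The moment condition in Theorem 1.1 is genuinely weaker than E|X|^α < ∞. The sketch below gives a hypothetical discrete example (not from the paper): with P(X = e^k) proportional to e^{-αk}/k, the atom at e^k contributes 1/k to E|X|^α (a divergent harmonic series) but only 1/k^{α/γ} to the log-weighted moment, which converges since α/γ > 1.

```python
import math

# Hypothetical discrete example showing that the condition
#   E|X|^alpha / (log(1+|X|))^{alpha/gamma - 1} < infinity
# is strictly weaker than E|X|^alpha < infinity when 0 < gamma < alpha.
# Take P(X = e^k) proportional to e^{-alpha*k} / k, k = 1, 2, ...
# The atom at e^k (where log(1+X) ~ k) contributes:
#   to E|X|^alpha:               (e^{-alpha*k}/k) * e^{alpha*k} = 1/k,
#   to the log-weighted moment:  (1/k) / k^{alpha/gamma - 1} = 1/k^{alpha/gamma}.
alpha, gamma = 2.0, 1.0   # alpha/gamma = 2, so the weighted series is sum 1/k^2

s_plain = s_weighted = 0.0
checkpoints = []
for k in range(1, 200001):
    s_plain += 1.0 / k                        # harmonic series: diverges
    s_weighted += 1.0 / k ** (alpha / gamma)  # sum 1/k^2: converges to pi^2/6
    if k in (100000, 200000):
        checkpoints.append((s_plain, s_weighted))

(p1, w1), (p2, w2) = checkpoints
print(p2 - p1, w2 - w1)  # plain sum still growing; weighted sum has converged
```

Doubling the range adds about log 2 ≈ 0.69 to the plain series but essentially nothing to the weighted one, so the example satisfies the theorem's hypothesis while having an infinite α-th moment.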

Lemmas and proofs
To prove the main result of this paper, we need the following important lemmas.

Lemma 2.1 (Utev and Peligrad [18]). Let p ≥ 2 and let {X_n; n ≥ 1} be a sequence of ρ*-mixing random variables with EX_n = 0 and E|X_n|^p < ∞ for all n ≥ 1. Then there exists a positive constant C depending only on p, s and ρ*(s) such that

E( max_{1≤j≤n} |Σ_{i=1}^j X_i|^p ) ≤ C { Σ_{i=1}^n E|X_i|^p + ( Σ_{i=1}^n EX_i^2 )^{p/2} }. (2.1)

The following lemma is a basic property of stochastic domination; for details, one may refer to Adler and Rosalsky [1], Adler et al. [2], or Wu [22]. In fact, since Definition 1.2 involves no multiplicative constant, the constant C appearing in the versions of Adler and Rosalsky [1] and Adler et al. [2] can be removed here.

Lemma 2.2. Let {X_n; n ≥ 1} be stochastically dominated by a random variable X. Then for all β > 0 and x > 0,

E|X_n|^β I(|X_n| ≤ x) ≤ E|X|^β I(|X| ≤ x) + x^β P(|X| > x), (2.2)

E|X_n|^β I(|X_n| > x) ≤ E|X|^β I(|X| > x). (2.3)

Consequently, E|X_n|^β ≤ E|X|^β.

Lemma 2.3. Under the conditions of Theorem 1.1, the bound (2.5) holds.

Proof. By Definition 1.2 and direct computation, (2.5) follows by combining (2.6)-(2.8).
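In the independent special case (ρ*(s) = 0 for all s), the p = 2 instance of a maximal inequality like Lemma 2.1 reduces to a Doob-type bound for the martingale of partial sums: E max_j |S_j|^2 ≤ 4 Σ_i EX_i^2. The Monte Carlo sketch below (a hypothetical illustration with a fixed seed, not part of the paper's proof) checks this bound for a simple ±1 random walk.

```python
import random

# Monte Carlo sanity check of a maximal inequality in the independent case
# (rho*(s) = 0), where the p = 2 bound reduces to the Doob-type inequality
#   E max_{1<=j<=n} |S_j|^2 <= 4 * sum_i E X_i^2
# for centered independent X_i.  Illustrative only.
random.seed(12345)
n, reps = 50, 4000
second_moment_sum = float(n)  # each X_i is +-1 with probability 1/2, EX_i^2 = 1

acc = 0.0
for _ in range(reps):
    s, m = 0.0, 0.0
    for _ in range(n):
        s += random.choice((-1.0, 1.0))
        m = max(m, abs(s))
    acc += m * m
lhs = acc / reps  # estimate of E max_j |S_j|^2

print(lhs, 4.0 * second_moment_sum)
```

The estimate sits between Σ EX_i^2 = n (since max_j |S_j| ≥ |S_n|) and the Doob constant 4n, consistent with a Rosenthal-type bound with a moderate constant.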
Proof of Theorem 1.1. For any given ε > 0, observe that

Σ_{n=1}^∞ n^{−1} E{ b_n^{−1} max_{1≤j≤n} |Σ_{i=1}^j a_ni X_i| − ε }_+^α
= Σ_{n=1}^∞ n^{−1} ∫_0^∞ P( max_{1≤j≤n} |Σ_{i=1}^j a_ni X_i| > b_n (ε + t^{1/α}) ) dt
≤ Σ_{n=1}^∞ n^{−1} P( max_{1≤j≤n} |Σ_{i=1}^j a_ni X_i| > ε b_n ) + Σ_{n=1}^∞ n^{−1} ∫_1^∞ P( max_{1≤j≤n} |Σ_{i=1}^j a_ni X_i| > b_n t^{1/α} ) dt
=: I + J. (2.9)

By Theorem A of Li et al. [12], stated in the first section, we directly get I < ∞. Hence, to prove (1.5), it suffices to show that J < ∞.
Without loss of generality, assume that a_ni ≥ 0. For all t ≥ 1 and 1 ≤ i ≤ n, n ∈ N, define the truncated variables

Y_i = −b_n t^{1/α} I(a_ni X_i < −b_n t^{1/α}) + a_ni X_i I(|a_ni X_i| ≤ b_n t^{1/α}) + b_n t^{1/α} I(a_ni X_i > b_n t^{1/α}).

It is easy to check that (2.10) holds, so to prove J < ∞, it suffices to show that J_1 < ∞. Next, we prove that (2.11) holds. Observe that

E|a_ni X| I(|a_ni X| > b_n t^{1/α}) = E|a_ni X| I(|a_ni X| > b_n t^{1/α}, |X| ≤ b_n) + E|a_ni X| I(|a_ni X| > b_n t^{1/α}, |X| > b_n). (2.12)

For 0 < γ < α and 1 < α ≤ 2, the two terms on the right-hand side of (2.12) can be bounded as in (2.13) and (2.14); thus (2.15) and (2.16) follow, and (2.11) holds by the argumentation of (2.12)-(2.16).
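The truncation used at this step caps a_ni X_i at level c = b_n t^{1/α}. A minimal check (with hypothetical sample values, and assuming the usual capped truncation of this literature) that the three-indicator definition is nothing but clipping to [−c, c], so that |Y_i| ≤ c always:

```python
# The truncated variable Y caps a*x at level c = b_n * t^{1/alpha}:
#   Y = -c * I(a*x < -c) + a*x * I(|a*x| <= c) + c * I(a*x > c),
# which is exactly clipping a*x to the interval [-c, c].
# Sample values below are hypothetical, for illustration only.

def truncate(ax, c):
    # literal transcription of the three-indicator definition
    return ((-c if ax < -c else 0.0)
            + (ax if -c <= ax <= c else 0.0)
            + (c if ax > c else 0.0))

def clip(ax, c):
    return max(-c, min(c, ax))

c = 2.0
samples = [-5.0, -2.0, -1.0, 0.0, 0.5, 1.0, 2.0, 3.0, 10.0]
agree = all(truncate(x, c) == clip(x, c) for x in samples)
print(agree)  # the two definitions coincide, and |Y| <= c always
```

Clipping (rather than discarding large summands) keeps the partial sums bounded while changing the mean only through the tail of a_ni X_i, which is exactly what the centering estimate (2.11) controls.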
Hence, for n sufficiently large, max_{1≤j≤n} |Σ_{i=1}^j EY_i| ≤ b_n t^{1/α}/2 holds uniformly for all t ≥ 1. Therefore, by Lemma 2.1 it remains to bound terms of the form E|a_ni X|^2 I(|a_ni X| ≤ b_n t^{1/α}); formula (2.2) of Lemma 2.2 yields the required bound. Substituting t = x^α and applying (2.3) of Lemma 2.2, Markov's inequality and Lemma 2.3, we also obtain a bound for E|a_ni X|^α I(|a_ni X| > b_n). Analogously to the argumentation of Lemma 2.3, it is easy to show that the remaining term is finite. Hence, the desired result J_1 < ∞ follows from the above statements. The proof of Theorem 1.1 is completed.
Remark 2.1. Under the conditions of Theorem 1.1, note that for all ε > 0,

∞ > Σ_{n=1}^∞ n^{−1} E{ b_n^{−1} max_{1≤j≤n} |Σ_{i=1}^j a_ni X_i| − ε }_+^α
≥ Σ_{n=1}^∞ n^{−1} E{ b_n^{−1} max_{1≤j≤n} |Σ_{i=1}^j a_ni X_i| − ε }_+^α I( max_{1≤j≤n} |Σ_{i=1}^j a_ni X_i| > 2ε b_n )
≥ ε^α Σ_{n=1}^∞ n^{−1} P( max_{1≤j≤n} |Σ_{i=1}^j a_ni X_i| > 2ε b_n ). (2.22)

Since ε > 0 is arbitrary, it follows from (2.22) that complete moment convergence is much stronger than complete convergence. Compared with the corresponding results of Li et al. [12] and Chen and Sung [6], it is worth pointing out that Theorem 1.1 of this paper is an extension and improvement of those results under the same moment condition. In addition, the main result partially settles the open problem posed by Huang et al. [10] for the case 0 < γ < α with 1 < α ≤ 2.

Conclusions
In this work, we considered the problem of complete moment convergence for weighted sums of weakly dependent (ρ*-mixing) random variables. The main results of this paper are presented in the form of the main theorem and a remark, together with Lemma 2.3, which plays a vital role in the proof of the main theorem. The main theorem improves and generalizes the corresponding complete convergence results of Li et al. [12] and Chen and Sung [6].