Complete convergence for weighted sums of pairwise independent random variables

Abstract. In the present paper, we establish complete convergence for weighted sums of pairwise independent random variables, from which a rate of convergence for moving average processes is deduced.


Introduction
In this paper we are interested in the complete convergence for weighted sums of pairwise independent random variables. First let us recall some definitions and known results.

Complete convergence
The following concept of complete convergence of a sequence of random variables, which plays an important role in the limit theory of probability, was first introduced by Hsu and Robbins [1]. A sequence of random variables $\{X_n; n \ge 1\}$ is said to converge completely to a constant $C$ (written $X_n \to C$ completely) if
$$\sum_{n=1}^{\infty} P(|X_n - C| > \varepsilon) < \infty \quad \text{for all } \varepsilon > 0.$$
In view of the Borel-Cantelli lemma, this implies that $X_n \to C$ almost surely (a.s.). For i.i.d. random variables, Hsu and Robbins [1] proved that the sequence of arithmetic means converges completely to the expected value if the variance of the summands is finite. Somewhat later, Erdős [2] proved the converse. These results are summarized as follows.
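As a quick numerical illustration of the definition (not part of the paper; the Uniform(-1, 1) summands, the threshold $\varepsilon = 0.5$, and the Monte Carlo sizes are hypothetical choices), the sketch below estimates the tail probabilities $P(|S_n/n| > \varepsilon)$ for i.i.d. bounded summands and shows them decaying rapidly in $n$, consistent with the Hsu-Robbins conclusion that their sum over $n$ is finite.

```python
import random

# Hedged illustration: for i.i.d. Uniform(-1, 1) summands, Var(X_1) < infinity,
# so by the Hsu-Robbins theorem the arithmetic means converge completely to 0,
# i.e. sum_n P(|S_n/n| > eps) < infinity.  We estimate a few of these tail
# probabilities by plain Monte Carlo.
random.seed(0)
EPS, REPS = 0.5, 2000

def tail_prob(n):
    """Monte Carlo estimate of P(|S_n / n| > EPS) over REPS repetitions."""
    hits = 0
    for _ in range(REPS):
        mean = sum(random.uniform(-1, 1) for _ in range(n)) / n
        if abs(mean) > EPS:
            hits += 1
    return hits / REPS

# Estimated tail probabilities for increasing n; they shrink very quickly.
probs = [tail_prob(n) for n in (4, 32, 256)]
```

The rapid decay (rather than mere convergence to zero) is what distinguishes complete convergence from convergence in probability.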
Hsu-Robbins-Erdős strong law. Let $\{X_n; n \ge 1\}$ be a sequence of i.i.d. random variables with mean zero and set $S_n = \sum_{i=1}^{n} X_i$. Then $EX_1^2 < \infty$ is equivalent to the condition that
$$\sum_{n=1}^{\infty} P(|S_n| > \varepsilon n) < \infty \quad \text{for all } \varepsilon > 0.$$

The Hsu-Robbins-Erdős strong law is a fundamental theorem in probability theory and has been intensively investigated in several directions by many authors over the past decades. One of the most important extensions is the Baum-Katz [3] strong law.

Baum-Katz strong law. Let $\alpha p \ge 1$, $\alpha > 1/2$, and let $\{X_n\}$ be a sequence of i.i.d. random variables with $E|X_1|^p < \infty$. If $1/2 < \alpha \le 1$, assume in addition that $EX_1 = 0$. Then
$$\sum_{n=1}^{\infty} n^{\alpha p - 2} P(|S_n| > \varepsilon n^{\alpha}) < \infty \quad \text{for all } \varepsilon > 0.$$

The Baum-Katz strong law bridges the integrability of the summands and the rate of convergence in the Marcinkiewicz-Zygmund strong law of large numbers. In general, the main tools for proving complete convergence are moment inequalities or exponential inequalities. However, for some dependent sequences (such as pairwise independent sequences and pairwise negatively dependent sequences), it was not known whether these inequalities hold. Recently, Bai et al. [4] obtained the following excellent result for the maximum partial sums of pairwise independent random variables.

Theorem ([4]). Let $1 \le p < 2$ and let $\{X_n; n \ge 1\}$ be a sequence of pairwise i.i.d. random variables. Then $EX_1 = 0$ and $E|X_1|^p < \infty$ if and only if for all $\varepsilon > 0$
$$\sum_{n=1}^{\infty} n^{-1} P\Big(\max_{1 \le k \le n} |S_k| > \varepsilon n^{1/p}\Big) < \infty.$$

It is well known that the analysis of weighted sums plays an important role in statistics, for example in the jackknife estimate and in nonparametric regression function estimation. Many authors have considered the complete convergence of weighted sums of random variables. Thrum [5] studied the almost sure convergence of weighted sums of i.i.d. random variables; Li et al. [6] obtained complete convergence of weighted sums without the identical distribution assumption. Liang and Su [7] extended the results of Thrum [5] and Li et al. [6] and showed the complete convergence of weighted sums of negatively associated sequences.
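Pairwise independence is genuinely weaker than mutual independence, which is what makes results for pairwise independent sequences nontrivial. The classical three-variable construction below (a standard textbook example, not taken from the paper) exhibits pairwise independent Rademacher variables that are not mutually independent, while the second moment of their sum still equals the sum of the variances, exactly as for uncorrelated summands — the $r = 2$ case of the moment inequality used later.

```python
from itertools import product

# Classic construction: X, Y i.i.d. Rademacher (values -1, +1, each with
# probability 1/2) and Z = X * Y.  The four equally likely outcomes of
# (X, Y, Z):
outcomes = [(x, y, x * y) for x, y in product((-1, 1), repeat=2)]

def prob(event):
    """Probability of an event over the four equally likely outcomes."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

# Each pair is independent: P(o[i]=a, o[j]=b) = P(o[i]=a) * P(o[j]=b)
# for every pair (i, j) and every choice of signs (a, b).
pairwise_ok = all(
    prob(lambda o, i=i, j=j, a=a, b=b: o[i] == a and o[j] == b)
    == prob(lambda o, i=i, a=a: o[i] == a) * prob(lambda o, j=j, b=b: o[j] == b)
    for i, j in ((0, 1), (0, 2), (1, 2))
    for a, b in product((-1, 1), repeat=2)
)

# ... but the triple is not mutually independent: P(X=Y=Z=1) = 1/4, not 1/8.
mutual_fails = prob(lambda o: o == (1, 1, 1)) != 0.5 ** 3

# Pairwise independence still gives E|S|^2 = sum of the variances (here 3):
ES2 = sum((x + y + z) ** 2 for x, y, z in outcomes) / len(outcomes)
```

This is why second-moment (and, more generally, von Bahr-Esseen type) bounds survive under pairwise independence even though tools built on full independence, such as the Hoffmann-Jørgensen inequality, may not.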
Baek [8] discussed the almost sure convergence of weighted sums of pairwise independent random variables. Huang et al. [9] and Shen et al. [10] studied complete convergence theorems for weighted sums of mixing random variables. Miao et al. [11] established some complete convergence results for martingales and, under certain uniform mixing conditions, obtained a necessary and sufficient condition for the convergence of martingale series. For negatively orthant dependent random variables, Gan and Chen [12] discussed the complete convergence of weighted sums, and for some special weighted sums, Chen and Sung [13] gave necessary and sufficient conditions for complete convergence. Deng et al. [14] and Zhao et al. [15] presented results on complete convergence for weighted sums of random variables satisfying a Rosenthal type inequality. Xue et al. [16], Wang et al. [17], and Deng et al. [18] studied complete convergence for weighted sums of negatively superadditive-dependent random variables. Qiu and Chen [19] obtained complete convergence for weighted sums of widely orthant dependent random variables. Wu [20], Jabbari [21], and Zhang et al. [22] gave complete convergence results for weighted sums of pairwise negative quadrant dependent random variables. For linearly negative quadrant dependent random variables, Choi et al. [23] established the complete convergence of weighted sums. Baek and Park [24] and Baek et al. [25] gave complete convergence results for arrays of rowwise negatively dependent random variables, and Qiu et al. [26] derived a general result on complete convergence.
In the present paper, we study sufficient conditions under which the following complete convergence of weighted sums of pairwise independent random variables holds:
$$\sum_{n=1}^{\infty} n^{t} P\Big(\Big|\sum_{i=1}^{\infty} a_{ni} X_i\Big| > \varepsilon n^{1/p}\Big) < \infty \quad \text{for all } \varepsilon > 0, \qquad (1)$$
where $\{a_{ni}; i \ge 1, n \ge 1\}$ is an array of constants and $t$ is a parameter specified in the main results.

Stochastic domination
A sequence $\{X_n; n \ge 1\}$ of random variables is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that
$$P(|X_n| > x) \le C P(|X| > x) \qquad (2)$$
for all $x \ge 0$ and $n \ge 1$. This domination condition is called weak, where weak refers to the fact that the domination is distributional. In [27], Gut introduced a weakly mean dominated condition. We say that the random variables $\{X_n; n \ge 1\}$ are weakly mean dominated by the random variable $X$, where $X$ is possibly defined on a different space, if for some $C > 0$,
$$\frac{1}{n} \sum_{k=1}^{n} P(|X_k| > x) \le C P(|X| > x) \qquad (3)$$
for all $x \ge 0$ and $n \ge 1$. It is clear that if $X$ dominates the sequence $\{X_n; n \ge 1\}$ in the weakly dominated sense (2), then it also dominates the sequence in the weakly mean dominated sense (3). Furthermore, Gut [27] gave an example showing that the condition (3) is strictly weaker than the condition (2).
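As a concrete instance (an illustration with hypothetical distributions, not taken from the paper), the sketch below checks the dominated condition (2) numerically: for $X_n \sim \mathrm{Exp}(n)$ and $X \sim \mathrm{Exp}(1)$ the tails satisfy $P(|X_n| > x) = e^{-nx} \le e^{-x} = P(|X| > x)$, so (2) holds with $C = 1$, and the Cesàro-averaged condition (3) then follows a fortiori.

```python
import math

# Hypothetical example: X_n ~ Exp(n), X ~ Exp(1), both nonnegative, so the
# tails have the closed forms below.
def tail_Xn(n, x):
    return math.exp(-n * x)   # P(|X_n| > x) for an Exp(n) variable

def tail_X(x):
    return math.exp(-x)       # P(|X| > x) for an Exp(1) variable

C = 1.0
xs = [i * 0.1 for i in range(101)]   # grid of x >= 0
ns = range(1, 51)

# Condition (2): P(|X_n| > x) <= C * P(|X| > x) for every n and x >= 0.
dominated = all(tail_Xn(n, x) <= C * tail_X(x) for n in ns for x in xs)

# Condition (3): the averaged tails obey the same bound (a fortiori here,
# since every individual term already does).
weak_mean_dominated = all(
    sum(tail_Xn(k, x) for k in range(1, n + 1)) / n <= C * tail_X(x)
    for n in ns for x in xs
)
```

Gut's point is the converse direction: (3) can hold for sequences whose individual tails violate (2), which is why (3) is the strictly weaker hypothesis.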
Our main results are stated in Section 2 and the proofs are given in Section 3. Throughout this paper, $C$ denotes a positive constant which may take different values at each appearance, and $I(\cdot)$ stands for the indicator function.

Main results
In each situation studied, we assume that $\sum_{i=1}^{\infty} a_{ni} X_i$ is finite a.s., which implies that $\sum_{i=1}^{\infty} a_{ni} X_i$ converges a.s. If $t < -1$, then (1) holds trivially, and hence the problem is of interest only for $t \ge -1$.
Theorem 2.1. Let $\{X_n; n \ge 1\}$ be a sequence of random variables which are stochastically dominated by a random variable $X$ (i.e., the inequality (2) holds) with $E|X|^{p(t+\beta+1)} < \infty$, where $p(t+\beta+1) > 0$ and $p > 0$. Let $\{a_{ni}; i \ge 1, n \ge 1\}$ be a bounded array of real numbers satisfying (4).
(1) If $0 < p(t+\beta+1) < 1$, then (5) holds.
(2) If $1 \le p(t+\beta+1) < 2$ and $\{X_n; n \ge 1\}$ is a sequence of pairwise independent random variables with $EX_n = 0$, then (5) holds.
(3) If $p(t+\beta+1) = 2$, $\{X_n; n \ge 1\}$ is a sequence of pairwise independent random variables with $EX_n = 0$, and the condition (4) is replaced by the corresponding condition for some $\alpha > 1$, then (5) holds.

Remark. Sung [28] considered the same problems for weighted sums of independent random variables. For the case $p(t+\beta+1) \ge 2$, Sung [28] gave the following result: if $\{X_n; n \ge 1\}$ are independent with $EX_n = 0$ and the corresponding condition holds for some $\alpha < 2/p$, then (5) holds. The method used in [28] for the case $p(t+\beta+1) \ge 2$ rests on the complete convergence theorem for arrays of rowwise independent random variables due to Sung et al. [29], whose key tool is the Hoffmann-Jørgensen inequality (see [30]). However, it is not known whether the Hoffmann-Jørgensen inequality holds for pairwise independent random variables.
Proof. The proof is similar to that of Theorem 2.1, so we omit it.
Corollary 2.6. Let $\{X_n; -\infty < n < \infty\}$ be a sequence of zero-mean pairwise independent random variables which are stochastically dominated by a random variable $X$ (i.e., the inequality (2) holds) with $E|X|^{p(t+2)} < \infty$ for some $0 < p < 2$ and $1 < p(t+2) < 2$. Let $\{a_n; -\infty < n < \infty\}$ be a sequence of real numbers such that
$$\sum_{n=-\infty}^{\infty} |a_n| < \infty,$$
and define the weights $a_{ni}$ from $\{a_n\}$ for each $i$ and $n$. Then for any $r > 0$,
$$\sum_{n=1}^{\infty} n^{t} P\Big(\Big|\sum_{i=-\infty}^{\infty} a_{ni} X_i\Big| > r n^{1/p}\Big) < \infty.$$
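The absolute-summability assumption $\sum_n |a_n| < \infty$ is what keeps the induced weights uniformly bounded. The sketch below illustrates this numerically; both the geometric coefficients $a_j = 2^{-|j|}$ and the moving-average-style definition $a_{ni} = \sum_{j=i-n}^{i} a_j$ are hypothetical choices for illustration (the corollary's own definition of $a_{ni}$ is not reproduced here).

```python
# Hypothetical illustration: geometric coefficients a_j = 2**(-|j|), which are
# absolutely summable with sum_j |a_j| = 1 + 2 * sum_{j>=1} 2**(-j) = 3.
J = 200                                       # truncation of the index range
a = {j: 2.0 ** (-abs(j)) for j in range(-J, J + 1)}
total = sum(abs(v) for v in a.values())       # ~ 3, up to the truncation

def a_ni(n, i):
    """Hypothetical moving-average weight: a_{ni} = sum_{j = i-n .. i} a_j."""
    return sum(a.get(j, 0.0) for j in range(i - n, i + 1))

# Each weight is a sub-sum of the absolutely convergent series, hence
# |a_{ni}| <= sum_j |a_j| uniformly in n and i.
bounded = all(abs(a_ni(n, i)) <= total
              for n in range(1, 30) for i in range(-30, 31))
```

This uniform bound is exactly the "bounded array" hypothesis of Theorem 2.1, which is how the corollary reduces the moving-average setting to the weighted-sum setting.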

Proofs of main results
In order to prove our main results, we need some preliminary lemmas.

Lemma 3.1 ([31]). Let $1 \le r \le 2$ and let $\{X_n; n \ge 1\}$ be a sequence of pairwise independent random variables with $EX_n = 0$ and $E|X_n|^r < \infty$ for all $n \ge 1$. Then there exists a positive constant $C_r$ depending only on $r$ such that
$$E\Big|\sum_{k=1}^{n} X_k\Big|^r \le C_r \sum_{k=1}^{n} E|X_k|^r, \quad \forall\, n \ge 1.$$

The following lemma is well known and its proof is standard.

Lemma 3.2. Let $\{X_n; n \ge 1\}$ be weakly mean dominated by a random variable $X$ (i.e., the condition (3) holds). If $E|X|^p < \infty$, then
(1) $n^{-1} \sum_{k=1}^{n} E|X_k|^p \le C E|X|^p$;
(2) $n^{-1} \sum_{k=1}^{n} E|X_k|^p I\{|X_k| \le x\} \le C\big(E|X|^p I\{|X| \le x\} + x^p P(|X| > x)\big)$ for all $x > 0$.

Remark 3.3. If the condition (3) is replaced by the condition (2), then it is easy to see that the bounds of Lemma 3.2 hold termwise for all $k \ge 1$.

Proof of Theorem 2.1. For $n \ge 1$, set
$$X'_n = X I\{|X| \le n^{1/p}\}, \qquad X''_n = X I\{|X| > n^{1/p}\}.$$
(1) Since $0 < p(t+\beta+1) < 1$, we can take a positive constant $\varepsilon$ such that $0 < p(t+\beta+1) + \varepsilon \le 1$. Let $u = p(t+\beta+1)$; then from Remark 3.3 we have
$$\sum_{k=1}^{\infty} E|X|^{u} I\{(k-1)^{u/p} < |X|^{u} \le k^{u/p}\} \le C E|X|^{u} < \infty.$$
Now we choose $\varepsilon > 0$ such that $p(t+\beta+1) - \varepsilon \ge q$ and $p(t+\beta+1) - \varepsilon > 0$. With $u = p(t+\beta+1)$, from Remark 2.2 and Remark 3.3 we have
$$\sum_{k=1}^{\infty} E|X|^{u-\varepsilon} I\{k^{1/p} < |X| \le (k+1)^{1/p}\} \le C E|X|^{u} < \infty,$$
which yields the first result of Theorem 2.1 by combining with (12).
(2) In this case the tail part is controlled by terms of the form $P(|X| > n^{1/p})$. From (13) and (14), we have proved the desired result for the case $1 < p(t+\beta+1) < 2$.
Hence there exists some positive constant $N_0$ such that the required estimate holds for all $n \ge N_0$. Now we choose $\varepsilon > 0$ such that $1 - \varepsilon \ge q$ and $1 - \varepsilon > 0$; then the resulting series is bounded by $C E|X| < \infty$. Based on the above discussions, the second result of the theorem can be proved.