Open Access. Published by De Gruyter, May 2, 2023, under a CC BY 4.0 license.

Convergence properties for coordinatewise asymptotically negatively associated random vectors in Hilbert space

  • Qihui He and Lin Pan
From the journal Open Mathematics

Abstract

In this work, the authors study some convergence results including weak law of large numbers, strong law of large numbers, complete convergence, and complete moment convergence for weighted sums of coordinatewise asymptotically negatively associated random vectors in Hilbert spaces. These results improve or extend some corresponding ones in the literature.

MSC 2010: 60F15

1 Introduction

Although random variables are assumed to be independent in many theoretical and methodological studies in probability limit theory, the independence assumption is often unrealistic in practical statistical problems. As a result, independence has been extended to various notions of dependence. One such extension is negative association, a concept introduced by Joag-Dev and Proschan [1] as follows.

Definition 1.1

A finite family of random variables $\{X_i, 1 \le i \le n\}$ is said to be negatively associated (NA) if for every pair of disjoint subsets $A$ and $B$ of $\{1, 2, \ldots, n\}$ and any real coordinatewise nondecreasing (or nonincreasing) functions $f_1$ on $\mathbb{R}^A$ and $f_2$ on $\mathbb{R}^B$,

$$\operatorname{Cov}(f_1(X_i, i \in A), f_2(X_j, j \in B)) \le 0,$$

whenever the covariance exists. An infinite family of random variables is NA if every finite subfamily is NA.

Joag-Dev and Proschan [1] presented many natural examples of NA random variables. Consequently, many scholars have investigated NA random variables; see, for example, Shao [2], Kuczmaszewska [3], Baek et al. [4], Kuczmaszewska and Lagodowski [5], and so on.
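As a concrete illustration (ours, not from the original paper), recall that Joag-Dev and Proschan [1] showed that the coordinates of a multinomial vector are NA. The following Python sketch, with function choices of our own, estimates the covariance in Definition 1.1 by Monte Carlo for two disjoint coordinate groups; the estimate should be nonpositive up to sampling error.

```python
import numpy as np

# Monte Carlo check of Definition 1.1 for a multinomial vector, whose
# coordinates are NA (Joag-Dev and Proschan [1]). Illustrative sketch only.
rng = np.random.default_rng(0)
X = rng.multinomial(20, [0.25, 0.25, 0.25, 0.25], size=100_000)

# Coordinatewise nondecreasing functions on the disjoint index sets
# A = {0, 1} and B = {2, 3}.
f1 = X[:, 0] + np.minimum(X[:, 1], 8)  # nondecreasing in (X_0, X_1)
f2 = X[:, 2] + 2.0 * X[:, 3]           # nondecreasing in (X_2, X_3)

print("empirical Cov(f1, f2):", np.cov(f1, f2)[0, 1])  # theory: <= 0
```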

Since NA random variables are important in applications, the concept of NA random variables has also been extended to Hilbert space. Let $H$ be a real separable Hilbert space with the norm $\|\cdot\|$ generated by an inner product $\langle \cdot, \cdot \rangle$. Denote $X^{(j)} = \langle X, e^{(j)} \rangle$, where $\{e^{(j)}, j \ge 1\}$ is an orthonormal basis in $H$ and $X$ is an $H$-valued random vector. Ko et al. [6] introduced the following concept of $H$-valued NA random vectors.

Definition 1.2

An $H$-valued random sequence $\{X_n, n \ge 1\}$ is said to be NA if there exists an orthonormal basis $\{e^{(j)}, j \ge 1\}$ in $H$ such that for any $d \ge 1$, the sequence $\{(X_n^{(1)}, X_n^{(2)}, \ldots, X_n^{(d)}), n \ge 1\}$ of $\mathbb{R}^d$-valued random vectors is NA.

Ko et al. [6] and Thành [7], respectively, obtained almost sure convergence results for NA random vectors in Hilbert space. Miao [8] established the Hájek-Rényi inequality for $H$-valued NA random vectors.

Huan et al. [9] introduced the following concept of coordinatewise negatively associated (CNA) random vectors in Hilbert space, which is much wider than the concept of NA random vectors.

Definition 1.3

If for each $j \ge 1$ the sequence of random variables $\{X_n^{(j)}, n \ge 1\}$ is NA, where $X_n^{(j)} = \langle X_n, e^{(j)} \rangle$, then the sequence of $H$-valued random vectors $\{X_n, n \ge 1\}$ is said to be CNA.

Another extension of NA random variables is asymptotically negatively associated (ANA) random variables, a concept introduced by Zhang and Wang [10] as follows.

Definition 1.4

A sequence $\{X_n, n \ge 1\}$ of random variables is called ANA if

$$\rho^-(n) = \sup\{\rho^-(S, T) : S, T \subset \mathbb{N}, \ \operatorname{dist}(S, T) \ge n\} \to 0, \quad \text{as } n \to \infty,$$

where

$$\rho^-(S, T) = 0 \vee \sup\left\{ \frac{\operatorname{Cov}(f(X_i, i \in S), g(X_j, j \in T))}{\sqrt{\operatorname{Var}(f(X_i, i \in S)) \operatorname{Var}(g(X_j, j \in T))}} : f, g \in \mathcal{C} \right\},$$

where $\mathcal{C}$ is the set of nondecreasing functions and $\operatorname{dist}(S, T)$ is the distance between the subsets $S$ and $T$.

As pointed out by Zhang and Wang [10], ANA random variables include $\tilde{\rho}$-mixing random variables and NA random variables as special cases. They also gave an example showing that an ANA sequence need not be NA or $\tilde{\rho}$-mixing. Hence, the study of convergence properties of ANA random variables is of considerable interest.

Analogously to Definition 1.3, Ko [11] extended the concept of ANA random variables to coordinatewise asymptotically negatively associated (CANA) random vectors in Hilbert spaces as follows.

Definition 1.5

If for each $j \ge 1$ the sequence of random variables $\{X_n^{(j)}, n \ge 1\}$ is ANA, where $X_n^{(j)} = \langle X_n, e^{(j)} \rangle$, then the sequence of $H$-valued random vectors $\{X_n, n \ge 1\}$ is said to be CANA.

Recall that the concept of complete convergence was introduced by Hsu and Robbins [12] as follows: a sequence of random variables $\{X_n, n \ge 1\}$ is said to converge completely to a constant $C$ if for any $\varepsilon > 0$, $\sum_{n=1}^{\infty} P(|X_n - C| > \varepsilon) < \infty$. By the Borel-Cantelli lemma, this implies $X_n \to C$ almost surely (a.s.). Ko [11] established complete convergence for CANA random vectors; Wang and Wang [13] investigated the three-series theorem and the strong law of large numbers for CANA random vectors. However, no work has studied weighted sums or complete moment convergence for this class of dependent random vectors. The concept of complete moment convergence was introduced by Chow [14] as follows: let $\{X_n, n \ge 1\}$ be a sequence of random variables and $a_n > 0$, $b_n > 0$, $q > 0$. If $\sum_{n=1}^{\infty} a_n E\{b_n^{-1}|X_n| - \varepsilon\}_+^q < \infty$ for all $\varepsilon > 0$, then $\{X_n, n \ge 1\}$ is said to be complete moment convergent.
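To make the Hsu-Robbins notion concrete, here is a small numerical sketch (ours, not from the cited works) of complete convergence for iid sample means: with finite variance, the series $\sum_{n} P(|S_n/n| > \varepsilon)$ converges, so its Monte Carlo partial sums should level off as $n$ grows.

```python
import numpy as np

# Monte Carlo illustration of complete convergence of sample means S_n / n
# for iid N(0, 1) variables (Hsu and Robbins [12]). Illustrative sketch only.
rng = np.random.default_rng(1)
eps, reps, N = 0.5, 20_000, 200

X = rng.standard_normal((reps, N))
means = np.cumsum(X, axis=1) / np.arange(1, N + 1)  # S_n / n per replication
tail = (np.abs(means) > eps).mean(axis=0)           # estimates of P(|S_n/n| > eps)
partial = np.cumsum(tail)                           # partial sums of the series
print("partial sums at n = 10, 50, 200:", partial[[9, 49, 199]])
```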

In this article, we study convergence theorems for weighted sums of CANA random vectors in Hilbert space. The weak law of large numbers, the strong law of large numbers, complete convergence, and complete moment convergence for weighted sums of CANA random vectors are established. These results improve or extend the corresponding results of Ko [11], Wang and Wang [13], and Hien and Thanh [15].

If $P(|X_n^{(j)}| > x) \le P(|X^{(j)}| > x)$ for all $j \ge 1$, $n \ge 1$, and $x \ge 0$, then the sequence of random vectors $\{X_n, n \ge 1\}$ is said to be coordinatewise stochastically dominated by $X$, where $X_n^{(j)} = \langle X_n, e^{(j)} \rangle$ and $X^{(j)} = \langle X, e^{(j)} \rangle$. For more details about stochastic domination, we refer the reader to Theorem 2.4 of Rosalsky and Thành [16] or Corollary 2.3 of Thành [17], where it is shown that the domination condition $P(|X_n| > x) \le P(|X| > x)$ is equivalent to $P(|X_n| > x) \le C_1 P(|C_2 X| > x)$ for some $C_1, C_2 > 0$. Throughout the article, $C$ denotes a positive constant whose value may differ from line to line, $x_+ = \max\{x, 0\}$ and $x_- = \max\{-x, 0\}$, $\log x$ denotes the natural logarithm of $x$, and $I(\cdot)$ is the indicator function. All limits are taken as $n \to \infty$.

2 Preliminaries

In this section, we state some preliminary lemmas.

Lemma 2.1

(Wang and Wang [13]) Let $\{X_n, n \ge 1\}$ be a sequence of ANA random variables. If $\{g_n(\cdot), n \ge 1\}$ are all increasing or all decreasing functions, then $\{g_n(X_n), n \ge 1\}$ is still a sequence of ANA random variables.

Lemma 2.2

(Wu et al. [18]) Let $\{X_n, n \ge 1\}$ be a sequence of ANA random variables with $EX_n = 0$ and $E|X_n|^p < \infty$ for some $1 \le p \le 2$ and all $n \ge 1$. Then there exists a positive constant $C$ depending only on $p$ and $\rho^-(\cdot)$ such that for all $n \ge 1$,

$$E\left(\max_{1 \le m \le n} \Big|\sum_{i=1}^{m} X_i\Big|^p\right) \le C \sum_{i=1}^{n} E|X_i|^p.$$

Inspired by Wu et al. [19], we have the following lemma.

Lemma 2.3

Let $1 \le p \le 2$ and let $\{X_n, n \ge 1\}$ be a sequence of CANA random vectors with $EX_n = 0$ for all $n \ge 1$. Then there exists a positive constant $C$ depending only on $p$ and $\rho^-(\cdot)$ such that for all $n \ge 1$,

$$E\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} X_i\Big\|^p\right) \le C \sum_{j=1}^{\infty} \sum_{i=1}^{n} E|X_i^{(j)}|^p.$$

In particular,

$$E\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} X_i\Big\|^2\right) \le C \sum_{i=1}^{n} E\|X_i\|^2.$$

Proof

We have by the $C_r$ inequality and Lemma 2.2 that

$$\begin{aligned}
E\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} X_i\Big\|^p\right)
&= E\left[\max_{1 \le m \le n} \left(\Big\|\sum_{i=1}^{m} X_i\Big\|^2\right)^{p/2}\right]
= E\left[\max_{1 \le m \le n} \left(\sum_{j=1}^{\infty} \Big\langle \sum_{i=1}^{m} X_i, e^{(j)} \Big\rangle^2\right)^{p/2}\right] \\
&\le E\left(\sum_{j=1}^{\infty} \max_{1 \le m \le n} \Big(\sum_{i=1}^{m} X_i^{(j)}\Big)^2\right)^{p/2}
\le E \sum_{j=1}^{\infty} \max_{1 \le m \le n} \Big|\sum_{i=1}^{m} X_i^{(j)}\Big|^p \quad (\text{by the } C_r \text{ inequality}) \\
&= \sum_{j=1}^{\infty} E \max_{1 \le m \le n} \Big|\sum_{i=1}^{m} X_i^{(j)}\Big|^p \quad (\text{by the monotone convergence theorem}) \\
&\le C \sum_{j=1}^{\infty} \sum_{i=1}^{n} E|X_i^{(j)}|^p \quad (\text{by Lemma 2.2}).
\end{aligned}$$

Lemma 2.4

(Wu [20]) Let $\{Z_n, n \ge 1\}$ be a sequence of random variables stochastically dominated by a random variable $Z$, that is, $\sup_{n \ge 1} P(|Z_n| > x) \le P(|Z| > x)$ for any $x \ge 0$. Then for any $a > 0$ and $b > 0$,

$$E|Z_n|^a I(|Z_n| > b) \le E|Z|^a I(|Z| > b); \qquad E|Z_n|^a I(|Z_n| \le b) \le E|Z|^a I(|Z| \le b) + b^a P(|Z| > b).$$

3 Main results and their proofs

In this section, we state the main results and their proofs. For the first result, we need the following notation. Let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of nonnegative real numbers. For each $n \ge 1$ and each $j \ge 1$, denote

$$X_{ni}^{(j)} = -n^{1/p} I(a_{ni} X_i^{(j)} < -n^{1/p}) + a_{ni} X_i^{(j)} I(|a_{ni} X_i^{(j)}| \le n^{1/p}) + n^{1/p} I(a_{ni} X_i^{(j)} > n^{1/p})$$

and

$$X_{ni} = \sum_{j=1}^{\infty} X_{ni}^{(j)} e^{(j)}.$$
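In coordinates, $X_{ni}^{(j)}$ is simply $a_{ni} X_i^{(j)}$ clipped to the interval $[-n^{1/p}, n^{1/p}]$. A one-line sketch of this truncation (function and argument names are ours, for illustration only):

```python
import numpy as np

# Coordinatewise truncation used throughout this section: clip a_ni * x
# to [-n**(1/p), n**(1/p)]. Function and argument names are ours.
def truncate(a_ni: float, x: np.ndarray, n: int, p: float) -> np.ndarray:
    level = n ** (1.0 / p)
    return np.clip(a_ni * x, -level, level)
```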

Theorem 3.1

Let $1 \le p < 2$, let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of nonnegative real numbers, and let $\{X_n, n \ge 1\}$ be a sequence of $H$-valued CANA random vectors. If, as $n \to \infty$,

  (i) $\sum_{j=1}^{\infty} \sum_{i=1}^{n} P(|a_{ni} X_i^{(j)}| > n^{1/p}) \to 0$,

  (ii) $n^{-2/p} \sum_{j=1}^{\infty} \sum_{i=1}^{n} E(a_{ni} X_i^{(j)})^2 I(|a_{ni} X_i^{(j)}| \le n^{1/p}) \to 0$,

then we obtain the Marcinkiewicz-Zygmund weak law of large numbers

$$n^{-1/p} \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} (a_{ni} X_i - E X_{ni})\Big\| \stackrel{P}{\longrightarrow} 0 \quad \text{as } n \to \infty.$$

Proof

It suffices to show that for any $\varepsilon > 0$,

(3.1) $P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} (a_{ni} X_i - X_{ni})\Big\| > \varepsilon n^{1/p}\right) \to 0$

and

(3.2) $P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} (X_{ni} - E X_{ni})\Big\| > \varepsilon n^{1/p}\right) \to 0.$

Relation (3.1) follows easily from (i):

$$P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} (a_{ni} X_i - X_{ni})\Big\| > \varepsilon n^{1/p}\right)
\le P\left(\bigcup_{i=1}^{n} \{a_{ni} X_i \ne X_{ni}\}\right)
\le \sum_{i=1}^{n} P\left(\bigcup_{j=1}^{\infty} \{|a_{ni} X_i^{(j)}| > n^{1/p}\}\right)
\le \sum_{j=1}^{\infty} \sum_{i=1}^{n} P(|a_{ni} X_i^{(j)}| > n^{1/p}) \to 0.$$

It remains to prove (3.2). By Markov's inequality, Lemma 2.3, (i), and (ii), we have

$$\begin{aligned}
P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} (X_{ni} - E X_{ni})\Big\| > \varepsilon n^{1/p}\right)
&\le \varepsilon^{-2} n^{-2/p} E \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} (X_{ni} - E X_{ni})\Big\|^2 \\
&\le C n^{-2/p} \sum_{i=1}^{n} E\|X_{ni} - E X_{ni}\|^2
\le C n^{-2/p} \sum_{i=1}^{n} E\|X_{ni}\|^2
= C n^{-2/p} \sum_{j=1}^{\infty} \sum_{i=1}^{n} E(X_{ni}^{(j)})^2 \\
&\le C n^{-2/p} \sum_{j=1}^{\infty} \sum_{i=1}^{n} E(a_{ni} X_i^{(j)})^2 I(|a_{ni} X_i^{(j)}| \le n^{1/p}) + C \sum_{j=1}^{\infty} \sum_{i=1}^{n} P(|a_{ni} X_i^{(j)}| > n^{1/p}) \to 0 \quad \text{as } n \to \infty.
\end{aligned}$$

Remark 3.1

If we take $a_{ni} = 1$ for each $1 \le i \le n$, $n \ge 1$, and $p = 1$, Theorem 3.1 yields Theorem 2.1 of Hien and Thanh [15], which was obtained for NA random vectors. Hence, Theorem 3.1 improves and extends the result of Hien and Thanh [15] from partial sums of NA random vectors to maximal weighted sums of CANA random vectors in Hilbert space.

Remark 3.2

In Theorem 3.1, if $\{n^{1/p}\}_{n \ge 1}$ is replaced by a general positive sequence $\{b_n\}_{n \ge 1}$, the result still holds; the proof is similar to that of Theorem 3.1.
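The conclusion of Theorem 3.1 is easy to visualize numerically. The sketch below (ours) takes the special case $a_{ni} = 1$, $p = 1$, independent zero-mean coordinates (independence being a special case of CANA), and approximates $H$ by $\mathbb{R}^d$; the statistic $n^{-1} \max_{1 \le m \le n} \|\sum_{i=1}^{m} X_i\|$ should shrink toward zero as $n$ grows.

```python
import numpy as np

# Numerical illustration of the weak law in Theorem 3.1 with a_ni = 1, p = 1,
# independent zero-mean coordinates, and H approximated by R^d. Sketch only.
rng = np.random.default_rng(2)
d, reps = 20, 200
scales = 1.0 / np.arange(1, d + 1)  # coordinate scales with summable squares

for n in (100, 500, 2_000):
    X = rng.standard_normal((reps, n, d)) * scales    # X_1, ..., X_n in R^d
    S = np.cumsum(X, axis=1)                          # running partial sums
    stat = np.linalg.norm(S, axis=2).max(axis=1) / n  # max_m ||S_m|| / n
    print(n, "P(stat > 0.1) ~", (stat > 0.1).mean())
```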

By Theorem 3.1, we obtain the following Kolmogorov weak law of large numbers for maximal weighted sums of CANA random vectors in Hilbert space.

Corollary 3.1

Let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of real numbers such that $\sum_{i=1}^{n} a_{ni}^2 = O(n)$. Let $\{X_n, n \ge 1\}$ be a sequence of zero-mean $H$-valued CANA random vectors. If $\{X_n, n \ge 1\}$ is coordinatewise stochastically dominated by a random vector $X$, then $\sum_{j=1}^{\infty} E|X^{(j)}| < \infty$ implies

$$n^{-1} \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| \stackrel{P}{\longrightarrow} 0 \quad \text{as } n \to \infty.$$

Proof

Note that $a_{ni} = a_{ni}^+ - a_{ni}^-$. Without loss of generality, we assume that $a_{ni} \ge 0$ for each $1 \le i \le n$, $n \ge 1$; otherwise, we can prove the result for $a_{ni}^+$ and $a_{ni}^-$ separately. According to Theorem 3.1, we only need to show that conditions (i) and (ii) hold with $p = 1$ and that

(3.3) $n^{-1} \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} E X_{ni}\Big\| \to 0 \quad \text{as } n \to \infty.$

It follows by the monotone convergence theorem that

$$\infty > \sum_{j=1}^{\infty} E|X^{(j)}| = \lim_{N \to \infty} \sum_{j=1}^{N} E|X^{(j)}| = \lim_{N \to \infty} E \sum_{j=1}^{N} |X^{(j)}| = E \lim_{N \to \infty} \sum_{j=1}^{N} |X^{(j)}| = E \sum_{j=1}^{\infty} |X^{(j)}|,$$

which indicates by the dominated convergence theorem that

(3.4) $\sum_{j=1}^{\infty} E|X^{(j)}| I(|X^{(j)}| > n^{\delta}) = E \sum_{j=1}^{\infty} |X^{(j)}| I(|X^{(j)}| > n^{\delta}) \le E\left[\sum_{j=1}^{\infty} |X^{(j)}| \, I\left(\sum_{j=1}^{\infty} |X^{(j)}| > n^{\delta}\right)\right] \to 0 \quad \text{as } n \to \infty$

for any $\delta > 0$. It then follows from Lemma 2.4 and (3.4) with $\delta = 1$ that

$$\begin{aligned}
\sum_{j=1}^{\infty} \sum_{i=1}^{n} P(a_{ni} |X_i^{(j)}| > n)
&= \sum_{j=1}^{\infty} \sum_{i=1}^{n} P(a_{ni} |X_i^{(j)}| > n, |X_i^{(j)}| \le n) + \sum_{j=1}^{\infty} \sum_{i=1}^{n} P(a_{ni} |X_i^{(j)}| > n, |X_i^{(j)}| > n) \\
&\le n^{-2} \sum_{j=1}^{\infty} \sum_{i=1}^{n} E(a_{ni} X_i^{(j)})^2 I(|X_i^{(j)}| \le n) + \sum_{j=1}^{\infty} \sum_{i=1}^{n} P(|X_i^{(j)}| > n) \\
&\le C n^{-1} \sum_{j=1}^{\infty} [E|X^{(j)}|^2 I(|X^{(j)}| \le n) + n^2 P(|X^{(j)}| > n)] \\
&= C n^{-1} \sum_{j=1}^{\infty} \left[\int_0^{\infty} P(|X^{(j)}|^2 I(|X^{(j)}| \le n) > s) \, ds + n^2 P(|X^{(j)}| > n)\right] \\
&= C n^{-1} \sum_{j=1}^{\infty} \left[\int_0^{n^2} P(|X^{(j)}|^2 I(|X^{(j)}| \le n) > s) \, ds + n^2 P(|X^{(j)}| > n)\right] \\
&= C n^{-1} \sum_{j=1}^{\infty} \left[\int_0^{n^2} (P(|X^{(j)}|^2 > s) - P(|X^{(j)}| > n)) \, ds + n^2 P(|X^{(j)}| > n)\right] \\
&= C n^{-1} \sum_{j=1}^{\infty} \int_0^{n^2} P(|X^{(j)}|^2 > s) \, ds
= 2 C n^{-1} \sum_{j=1}^{\infty} \int_0^{n} t P(|X^{(j)}| > t) \, dt \\
&= 2 C n^{-1} \sum_{j=1}^{\infty} \int_0^{1} t P(|X^{(j)}| > t) \, dt + 2 C n^{-1} \sum_{j=1}^{\infty} \sum_{k=1}^{n-1} \int_k^{k+1} t P(|X^{(j)}| > t) \, dt \\
&\le C n^{-1} \sum_{j=1}^{\infty} E|X^{(j)}| + C n^{-1} \sum_{j=1}^{\infty} \sum_{k=1}^{n} k P(|X^{(j)}| > k) \\
&\le C n^{-1} \sum_{j=1}^{\infty} E|X^{(j)}| + C n^{-1} \sum_{k=1}^{n} \sum_{j=1}^{\infty} E|X^{(j)}| I(|X^{(j)}| > k) \to 0 \quad \text{as } n \to \infty.
\end{aligned}$$

Hence, (i) holds. Similarly to the verification of (i), we obtain (ii) from

$$\begin{aligned}
n^{-2} \sum_{j=1}^{\infty} \sum_{i=1}^{n} E(a_{ni} X_i^{(j)})^2 I(a_{ni} |X_i^{(j)}| \le n)
&= n^{-2} \sum_{j=1}^{\infty} \sum_{i=1}^{n} E(a_{ni} X_i^{(j)})^2 I(a_{ni} |X_i^{(j)}| \le n, |X_i^{(j)}| \le n) \\
&\quad + n^{-2} \sum_{j=1}^{\infty} \sum_{i=1}^{n} E(a_{ni} X_i^{(j)})^2 I(a_{ni} |X_i^{(j)}| \le n, |X_i^{(j)}| > n) \\
&\le n^{-2} \sum_{j=1}^{\infty} \sum_{i=1}^{n} E(a_{ni} X_i^{(j)})^2 I(|X_i^{(j)}| \le n) + \sum_{j=1}^{\infty} \sum_{i=1}^{n} P(|X_i^{(j)}| > n) \to 0 \quad \text{as } n \to \infty.
\end{aligned}$$

It remains to verify (3.3). Note from $\sum_{i=1}^{n} a_{ni}^2 = O(n)$ that $\max_{1 \le i \le n} a_{ni} \le (\sum_{i=1}^{n} a_{ni}^2)^{1/2} = O(n^{1/2})$ and $\sum_{i=1}^{n} a_{ni} = O(n)$. We obtain by $E X_i = 0$, Lemma 2.4, and (3.4) with $\delta = 1/2$ that

$$\begin{aligned}
n^{-1} \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} E X_{ni}\Big\|
&= n^{-1} \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} E(a_{ni} X_i - X_{ni})\Big\|
\le n^{-1} \sum_{j=1}^{\infty} \sum_{i=1}^{n} a_{ni} E|X_i^{(j)}| I(a_{ni} |X_i^{(j)}| > n) \\
&\le n^{-1} \sum_{j=1}^{\infty} \sum_{i=1}^{n} a_{ni} E|X_i^{(j)}| I(C |X_i^{(j)}| > n^{1/2})
\le C \sum_{j=1}^{\infty} E|X^{(j)}| I(C |X^{(j)}| > n^{1/2}) \to 0 \quad \text{as } n \to \infty.
\end{aligned}$$

Theorem 3.2

Let $1 \le p < 2$ and $\alpha p \ge 1$. Let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of real numbers such that $\sum_{i=1}^{n} |a_{ni}|^q = O(n)$ for some $p < q \le 2$. Let $\{X_n, n \ge 1\}$ be a sequence of zero-mean $H$-valued CANA random vectors. If $\{X_n, n \ge 1\}$ is coordinatewise stochastically dominated by a random vector $X$, then $\sum_{j=1}^{\infty} E|X^{(j)}|^p < \infty$ implies that

$$\sum_{n=1}^{\infty} n^{\alpha p - 2} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| > \varepsilon n^{\alpha}\right) < \infty \quad \text{for any } \varepsilon > 0.$$

Proof

Without loss of generality, we again assume that $a_{ni} \ge 0$ for each $1 \le i \le n$, $n \ge 1$. For each $n \ge 1$ and each $j \ge 1$, denote

$$\begin{aligned}
Y_i^{(j)} &= -n^{\alpha} I(X_i^{(j)} < -n^{\alpha}) + X_i^{(j)} I(|X_i^{(j)}| \le n^{\alpha}) + n^{\alpha} I(X_i^{(j)} > n^{\alpha}), \\
Z_i^{(j)} &= X_i^{(j)} - Y_i^{(j)} = (X_i^{(j)} + n^{\alpha}) I(X_i^{(j)} < -n^{\alpha}) + (X_i^{(j)} - n^{\alpha}) I(X_i^{(j)} > n^{\alpha}), \\
Y_i &= \sum_{j=1}^{\infty} Y_i^{(j)} e^{(j)}, \quad \text{and} \quad Z_i = \sum_{j=1}^{\infty} Z_i^{(j)} e^{(j)}.
\end{aligned}$$

It is easy to see that

$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha p - 2} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| > \varepsilon n^{\alpha}\right)
&= \sum_{n=1}^{\infty} n^{\alpha p - 2} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} \sum_{j=1}^{\infty} X_i^{(j)} e^{(j)}\Big\| > \varepsilon n^{\alpha}\right) \\
&\le \sum_{n=1}^{\infty} n^{\alpha p - 2} P\left(\bigcup_{i=1}^{n} \bigcup_{j=1}^{\infty} \{|X_i^{(j)}| > n^{\alpha}\}\right) + \sum_{n=1}^{\infty} n^{\alpha p - 2} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} Y_i\Big\| > \varepsilon n^{\alpha}\right) \\
&\le \sum_{n=1}^{\infty} n^{\alpha p - 2} \sum_{j=1}^{\infty} \sum_{i=1}^{n} P(|X_i^{(j)}| > n^{\alpha}) + \sum_{n=1}^{\infty} n^{\alpha p - 2} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} Y_i\Big\| > \varepsilon n^{\alpha}\right) \\
&=: I_1 + I_2.
\end{aligned}$$

By the stochastic domination assumption, we have

$$\begin{aligned}
I_1 &\le \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - 1} P(|X^{(j)}| > n^{\alpha})
= \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - 1} \sum_{m=n}^{\infty} P(m^{\alpha} < |X^{(j)}| \le (m+1)^{\alpha}) \\
&= \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} P(m^{\alpha} < |X^{(j)}| \le (m+1)^{\alpha}) \sum_{n=1}^{m} n^{\alpha p - 1}
\le C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha p} P(m^{\alpha} < |X^{(j)}| \le (m+1)^{\alpha}) \\
&\le C \sum_{j=1}^{\infty} E|X^{(j)}|^p < \infty.
\end{aligned}$$

To estimate $I_2$, we first show that

$$n^{-\alpha} \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} E Y_i\Big\| \to 0 \quad \text{as } n \to \infty.$$

Note by the Hölder inequality that $\sum_{i=1}^{n} a_{ni} = O(n)$. Hence, by the zero-mean assumption, we have

$$\begin{aligned}
n^{-\alpha} \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} E Y_i\Big\|
&= n^{-\alpha} \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} E Z_i\Big\|
\le n^{-\alpha} \sum_{j=1}^{\infty} \sum_{i=1}^{n} a_{ni} E|X_i^{(j)}| I(|X_i^{(j)}| > n^{\alpha}) \\
&\le C n^{1-\alpha} \sum_{j=1}^{\infty} E|X^{(j)}| I(|X^{(j)}| > n^{\alpha})
\le C n^{1-\alpha p} \sum_{j=1}^{\infty} E|X^{(j)}|^p I(|X^{(j)}| > n^{\alpha}) \\
&\le C \sum_{j=1}^{\infty} E|X^{(j)}|^p I(|X^{(j)}| > n^{\alpha}) \to 0 \quad \text{as } n \to \infty.
\end{aligned}$$

Therefore, for all sufficiently large $n$,

(3.5) $\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} E Y_i\Big\| \le \varepsilon n^{\alpha}/2.$

Note by Lemma 2.1 that $\{a_{ni} Y_i^{(j)}, 1 \le i \le n, n \ge 1\}$ is ANA for each $j \ge 1$, so $\{a_{ni}(Y_i - E Y_i), 1 \le i \le n, n \ge 1\}$ is CANA. Hence, by the Markov inequality, the $C_r$ inequality, Lemmas 2.3 and 2.4, $\sum_{i=1}^{n} |a_{ni}|^q = O(n)$, and (3.5), we have

$$\begin{aligned}
I_2 &\le C \sum_{n=1}^{\infty} n^{\alpha p - 2} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni}(Y_i - E Y_i)\Big\| > \varepsilon n^{\alpha}/2\right)
\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha q - 2} E \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni}(Y_i - E Y_i)\Big\|^q \\
&\le C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha q - 2} \sum_{i=1}^{n} a_{ni}^q E|Y_i^{(j)}|^q \\
&\le C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - 2} \sum_{i=1}^{n} a_{ni}^q P(|X^{(j)}| > n^{\alpha}) + C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha q - 2} \sum_{i=1}^{n} a_{ni}^q E|X^{(j)}|^q I(|X^{(j)}| \le n^{\alpha}) \\
&\le C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - 1} P(|X^{(j)}| > n^{\alpha}) + C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha q - 1} E|X^{(j)}|^q I(|X^{(j)}| \le n^{\alpha})
=: I_{21} + I_{22}.
\end{aligned}$$

Similarly to the proof of $I_1 < \infty$, we have $I_{21} < \infty$. Finally, we deal with $I_{22}$. It is easy to see that

$$\begin{aligned}
I_{22} &\le C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha q - 1} \sum_{m=1}^{n} E|X^{(j)}|^q I((m-1)^{\alpha} < |X^{(j)}| \le m^{\alpha}) \\
&= C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} E|X^{(j)}|^q I((m-1)^{\alpha} < |X^{(j)}| \le m^{\alpha}) \sum_{n=m}^{\infty} n^{\alpha p - \alpha q - 1} \\
&\le C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha p - \alpha q} E|X^{(j)}|^q I((m-1)^{\alpha} < |X^{(j)}| \le m^{\alpha})
\le C \sum_{j=1}^{\infty} E|X^{(j)}|^p < \infty.
\end{aligned}$$

Remark 3.3

Ko [11] obtained complete convergence for maximal partial sums of CANA random vectors under the condition $\alpha p > 1$. Theorem 3.2 thus improves the result of Ko [11] from partial sums to weighted sums and relaxes the condition $\alpha p > 1$ to $\alpha p \ge 1$.

By Theorem 3.2, we can obtain the following Marcinkiewicz-Zygmund strong law of large numbers for weighted sums of CANA random vectors in Hilbert space.

Theorem 3.3

Let $1 \le p < 2$. Let $\{a_n, n \ge 1\}$ be a sequence of real numbers such that $\sum_{i=1}^{n} |a_i|^q = O(n)$ for some $p < q \le 2$, and let $\{X_n, n \ge 1\}$ be a sequence of zero-mean $H$-valued CANA random vectors. If $\{X_n, n \ge 1\}$ is coordinatewise stochastically dominated by a random vector $X$, then $\sum_{j=1}^{\infty} E|X^{(j)}|^p < \infty$ implies that

$$\frac{1}{n^{1/p}} \Big\|\sum_{i=1}^{n} a_i X_i\Big\| \to 0 \quad \text{a.s. as } n \to \infty.$$

Proof

Applying Theorem 3.2 with $a_{ni} = a_i$ for each $1 \le i \le n$, $n \ge 1$, and $\alpha = 1/p$, we have for any $\varepsilon > 0$ that

$$\begin{aligned}
\infty &> \sum_{n=1}^{\infty} n^{-1} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_i X_i\Big\| > \varepsilon n^{1/p}\right)
= \sum_{k=0}^{\infty} \sum_{n=2^k}^{2^{k+1}-1} n^{-1} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_i X_i\Big\| > \varepsilon n^{1/p}\right) \\
&\ge \frac{1}{2} \sum_{k=0}^{\infty} P\left(\max_{1 \le m \le 2^k} \Big\|\sum_{i=1}^{m} a_i X_i\Big\| > \varepsilon (2^{k+1})^{1/p}\right).
\end{aligned}$$

Hence, by the Borel-Cantelli lemma, the above implies that as $k \to \infty$,

$$\frac{1}{(2^k)^{1/p}} \max_{1 \le m \le 2^{k+1}} \Big\|\sum_{i=1}^{m} a_i X_i\Big\| \to 0 \quad \text{a.s.}$$

Since for any fixed $n$ there exists a nonnegative integer $k$ such that $2^k \le n < 2^{k+1}$, we have

$$\frac{1}{n^{1/p}} \Big\|\sum_{i=1}^{n} a_i X_i\Big\| \le \frac{1}{(2^k)^{1/p}} \max_{1 \le m \le 2^{k+1}} \Big\|\sum_{i=1}^{m} a_i X_i\Big\| \to 0 \quad \text{a.s.}$$

Remark 3.4

If we take $a_i = 1$ for each $1 \le i \le n$, $n \ge 1$, and $p = 1$, Theorem 3.3 yields Theorem 2.4 of Wang and Wang [13] for partial sums. Hence, Theorem 3.3 improves and extends the result of Wang and Wang [13] from the Kolmogorov strong law of large numbers for partial sums to the Marcinkiewicz-Zygmund strong law of large numbers for weighted sums.
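As a quick sanity check of Theorem 3.3 (our illustration, with independent coordinates standing in for CANA), one can track the path of $n^{-1/p} \|\sum_{i=1}^{n} a_i X_i\|$ for bounded weights, which satisfy $\sum_{i=1}^{n} |a_i|^q = O(n)$ for every $q$:

```python
import numpy as np

# One sample path of n^{-1/p} || sum_{i<=n} a_i X_i || as in Theorem 3.3,
# with p = 1.5, bounded weights a_i, and independent zero-mean coordinates
# whose p-th moments are summable over j. Illustrative sketch only.
rng = np.random.default_rng(3)
p, d, N = 1.5, 20, 100_000
a = 1.0 + 0.5 * np.sin(np.arange(1, N + 1))  # bounded weights: sum |a_i|^q = O(n)
scales = 1.0 / np.arange(1, d + 1) ** 2      # makes sum_j E|X^{(j)}|^p finite

X = rng.standard_normal((N, d)) * scales
S = np.cumsum(a[:, None] * X, axis=0)        # weighted partial sums in R^d
ratio = np.linalg.norm(S, axis=1) / np.arange(1, N + 1) ** (1 / p)
print("||S_n|| / n^{1/p} at n = 1e2..1e5:", ratio[[99, 999, 9_999, 99_999]])
```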

Theorem 3.4

Let $0 < r < 2$, $1 \le p < 2$, and $\alpha p \ge 1$. Let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of real numbers such that $\sum_{i=1}^{n} |a_{ni}|^q = O(n)$ for some $\max\{r, p\} < q \le 2$. Let $\{X_n, n \ge 1\}$ be a sequence of zero-mean $H$-valued CANA random vectors coordinatewise stochastically dominated by a random vector $X$. Then the moment condition

$$\begin{cases}
\sum_{j=1}^{\infty} E|X^{(j)}|^p < \infty, & \text{if } 0 < r < p, \\
\sum_{j=1}^{\infty} E|X^{(j)}|^p \log(1 + |X^{(j)}|) < \infty, & \text{if } r = p, \\
\sum_{j=1}^{\infty} E|X^{(j)}|^r < \infty, & \text{if } p < r < 2,
\end{cases}$$

implies that

$$\sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| > t^{1/r}\right) dt < \infty.$$

Proof

Without loss of generality, we assume that $a_{ni} \ge 0$ for each $1 \le i \le n$, $n \ge 1$. For any $t > 0$ and each $j \ge 1$, denote

$$\begin{aligned}
U_i^{(j)} &= -t^{1/r} I(X_i^{(j)} < -t^{1/r}) + X_i^{(j)} I(|X_i^{(j)}| \le t^{1/r}) + t^{1/r} I(X_i^{(j)} > t^{1/r}), \\
V_i^{(j)} &= X_i^{(j)} - U_i^{(j)} = (X_i^{(j)} + t^{1/r}) I(X_i^{(j)} < -t^{1/r}) + (X_i^{(j)} - t^{1/r}) I(X_i^{(j)} > t^{1/r}), \\
U_i &= \sum_{j=1}^{\infty} U_i^{(j)} e^{(j)}, \quad \text{and} \quad V_i = \sum_{j=1}^{\infty} V_i^{(j)} e^{(j)}.
\end{aligned}$$

We have by Lemma 2.1 that $\{U_i^{(j)}, 1 \le i \le n\}$ are still ANA random variables for each $j \ge 1$. It is easily seen that

$$\begin{aligned}
&\sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| > t^{1/r}\right) dt
= \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} \sum_{j=1}^{\infty} X_i^{(j)} e^{(j)}\Big\| > t^{1/r}\right) dt \\
&\quad\le \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} P\left(\bigcup_{i=1}^{n} \bigcup_{j=1}^{\infty} \{|X_i^{(j)}| > t^{1/r}\}\right) dt + \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} U_i\Big\| > t^{1/r}\right) dt \\
&\quad\le \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \sum_{j=1}^{\infty} \sum_{i=1}^{n} P(|X_i^{(j)}| > t^{1/r}) \, dt + \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} U_i\Big\| > t^{1/r}\right) dt \\
&\quad=: J_1 + J_2.
\end{aligned}$$

By stochastic domination and $E|X| = \int_0^{\infty} P(|X| > t) \, dt$, we have that

$$\begin{aligned}
J_1 &\le \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \int_{n^{\alpha r}}^{\infty} P(|X^{(j)}| > t^{1/r}) \, dt
= \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \int_{n^{\alpha r}}^{\infty} P(|X^{(j)}| I(|X^{(j)}| > n^{\alpha}) > t^{1/r}) \, dt \\
&\le \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \int_0^{\infty} P(|X^{(j)}|^r I(|X^{(j)}| > n^{\alpha}) > t) \, dt
= \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} E|X^{(j)}|^r I(|X^{(j)}| > n^{\alpha}) \\
&= \sum_{j=1}^{\infty} \sum_{k=1}^{\infty} E|X^{(j)}|^r I(k^{\alpha} < |X^{(j)}| \le (k+1)^{\alpha}) \sum_{n=1}^{k} n^{\alpha p - \alpha r - 1}.
\end{aligned}$$

Therefore, if r < p ,

$$J_1 \le C \sum_{j=1}^{\infty} \sum_{k=1}^{\infty} k^{\alpha p - \alpha r} E|X^{(j)}|^r I(k^{\alpha} < |X^{(j)}| \le (k+1)^{\alpha}) \le C \sum_{j=1}^{\infty} E|X^{(j)}|^p < \infty.$$

If r = p ,

$$J_1 \le C \sum_{j=1}^{\infty} \sum_{k=1}^{\infty} \log k \, E|X^{(j)}|^p I(k^{\alpha} < |X^{(j)}| \le (k+1)^{\alpha}) \le C \sum_{j=1}^{\infty} E|X^{(j)}|^p \log(1 + |X^{(j)}|) < \infty.$$

If r > p ,

$$J_1 \le C \sum_{j=1}^{\infty} \sum_{k=1}^{\infty} E|X^{(j)}|^r I(k^{\alpha} < |X^{(j)}| \le (k+1)^{\alpha}) \le C \sum_{j=1}^{\infty} E|X^{(j)}|^r < \infty.$$

To estimate J 2 , we first show that

$$\sup_{t \ge n^{\alpha r}} t^{-1/r} \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} E U_i\Big\| \to 0 \quad \text{as } n \to \infty.$$

Noting by the Hölder inequality that $\sum_{i=1}^{n} a_{ni} = O(n)$, we have by $E X_i = 0$ and (3.4) with $\delta = \alpha$ that

$$\begin{aligned}
\sup_{t \ge n^{\alpha r}} t^{-1/r} \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} E U_i\Big\|
&= \sup_{t \ge n^{\alpha r}} t^{-1/r} \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} E V_i\Big\|
\le n^{-\alpha} \sup_{t \ge n^{\alpha r}} \sum_{j=1}^{\infty} \sum_{i=1}^{n} a_{ni} E|X_i^{(j)}| I(|X_i^{(j)}| > t^{1/r}) \\
&\le n^{-\alpha} \sum_{j=1}^{\infty} \sum_{i=1}^{n} a_{ni} E|X_i^{(j)}| I(|X_i^{(j)}| > n^{\alpha})
\le C n^{1-\alpha} \sum_{j=1}^{\infty} E|X^{(j)}| I(|X^{(j)}| > n^{\alpha}) \\
&\le C n^{1-\alpha p} \sum_{j=1}^{\infty} E|X^{(j)}|^p I(|X^{(j)}| > n^{\alpha})
\le C \sum_{j=1}^{\infty} E|X^{(j)}|^p I(|X^{(j)}| > n^{\alpha}) \to 0 \quad \text{as } n \to \infty.
\end{aligned}$$

Therefore, for all sufficiently large $n$ and any $t \ge n^{\alpha r}$,

(3.6) $\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} E U_i\Big\| \le t^{1/r}/2.$

Since $\{a_{ni} U_i^{(j)}, 1 \le i \le n, n \ge 1\}$ is ANA for each $j \ge 1$, the sequence $\{a_{ni}(U_i - E U_i), 1 \le i \le n, n \ge 1\}$ is CANA. Thus, by the Markov inequality, the $C_r$ inequality, Lemmas 2.3 and 2.4, $\sum_{i=1}^{n} |a_{ni}|^q = O(n)$, and (3.6),

$$\begin{aligned}
J_2 &\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni}(U_i - E U_i)\Big\| > t^{1/r}/2\right) dt \\
&\le C \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} t^{-q/r} E \max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni}(U_i - E U_i)\Big\|^q \, dt \\
&\le C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} t^{-q/r} \sum_{i=1}^{n} a_{ni}^q E|U_i^{(j)}|^q \, dt \\
&\le C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} \sum_{i=1}^{n} a_{ni}^q P(|X^{(j)}| > t^{1/r}) \, dt + C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} t^{-q/r} \sum_{i=1}^{n} a_{ni}^q E|X^{(j)}|^q I(|X^{(j)}| \le t^{1/r}) \, dt \\
&\le C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \int_{n^{\alpha r}}^{\infty} P(|X^{(j)}| > t^{1/r}) \, dt + C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \int_{n^{\alpha r}}^{\infty} t^{-q/r} E|X^{(j)}|^q I(|X^{(j)}| \le t^{1/r}) \, dt \\
&=: J_{21} + J_{22}.
\end{aligned}$$

Similarly to the proof of $J_1 < \infty$, we have $J_{21} < \infty$. Now we deal with $J_{22}$. It is easy to obtain that

$$\begin{aligned}
J_{22} &\le C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \sum_{m=n}^{\infty} \int_{m^{\alpha r}}^{(m+1)^{\alpha r}} t^{-q/r} E|X^{(j)}|^q I(|X^{(j)}| \le t^{1/r}) \, dt \\
&\le C \sum_{j=1}^{\infty} \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 1} \sum_{m=n}^{\infty} m^{\alpha r - \alpha q - 1} E|X^{(j)}|^q I(|X^{(j)}| \le (m+1)^{\alpha}) \\
&= C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha r - \alpha q - 1} E|X^{(j)}|^q I(|X^{(j)}| \le (m+1)^{\alpha}) \sum_{n=1}^{m} n^{\alpha p - \alpha r - 1}.
\end{aligned}$$

Therefore, if r < p , we have that

$$\begin{aligned}
J_{22} &\le C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha p - \alpha q - 1} E|X^{(j)}|^q I(|X^{(j)}| \le (m+1)^{\alpha}) \\
&\le C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha p - \alpha q - 1} E|X^{(j)}|^q I(|X^{(j)}| \le 1) + C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha p - \alpha q - 1} \sum_{l=1}^{m} E|X^{(j)}|^q I(l^{\alpha} < |X^{(j)}| \le (l+1)^{\alpha}) \\
&\le C \sum_{j=1}^{\infty} E|X^{(j)}|^p + C \sum_{j=1}^{\infty} \sum_{l=1}^{\infty} E|X^{(j)}|^q I(l^{\alpha} < |X^{(j)}| \le (l+1)^{\alpha}) \sum_{m=l}^{\infty} m^{\alpha p - \alpha q - 1} \\
&\le C \sum_{j=1}^{\infty} E|X^{(j)}|^p + C \sum_{j=1}^{\infty} \sum_{l=1}^{\infty} l^{\alpha p - \alpha q} E|X^{(j)}|^q I(l^{\alpha} < |X^{(j)}| \le (l+1)^{\alpha})
\le C \sum_{j=1}^{\infty} E|X^{(j)}|^p < \infty.
\end{aligned}$$

If r = p , we have

$$\begin{aligned}
J_{22} &\le C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha p - \alpha q - 1} \log m \, E|X^{(j)}|^q I(|X^{(j)}| \le (m+1)^{\alpha}) \\
&\le C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha p - \alpha q - 1} \log m \, E|X^{(j)}|^q I(|X^{(j)}| \le 1) + C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha p - \alpha q - 1} \log m \sum_{l=1}^{m} E|X^{(j)}|^q I(l^{\alpha} < |X^{(j)}| \le (l+1)^{\alpha}) \\
&\le C \sum_{j=1}^{\infty} E|X^{(j)}|^q I(|X^{(j)}| \le 1) + C \sum_{j=1}^{\infty} \sum_{l=1}^{\infty} E|X^{(j)}|^q I(l^{\alpha} < |X^{(j)}| \le (l+1)^{\alpha}) \sum_{m=l}^{\infty} m^{\alpha p - \alpha q - 1} \log m \\
&\le C \sum_{j=1}^{\infty} E|X^{(j)}|^p + C \sum_{j=1}^{\infty} \sum_{l=1}^{\infty} l^{\alpha p - \alpha q} \log l \, E|X^{(j)}|^q I(l^{\alpha} < |X^{(j)}| \le (l+1)^{\alpha}) \\
&\le C \sum_{j=1}^{\infty} E|X^{(j)}|^p + C \sum_{j=1}^{\infty} E|X^{(j)}|^p \log(1 + |X^{(j)}|) < \infty.
\end{aligned}$$

If r > p , we also have that

$$\begin{aligned}
J_{22} &\le C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha r - \alpha q - 1} E|X^{(j)}|^q I(|X^{(j)}| \le (m+1)^{\alpha}) \\
&\le C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha r - \alpha q - 1} E|X^{(j)}|^q I(|X^{(j)}| \le 1) + C \sum_{j=1}^{\infty} \sum_{m=1}^{\infty} m^{\alpha r - \alpha q - 1} \sum_{l=1}^{m} E|X^{(j)}|^q I(l^{\alpha} < |X^{(j)}| \le (l+1)^{\alpha}) \\
&\le C \sum_{j=1}^{\infty} E|X^{(j)}|^r + C \sum_{j=1}^{\infty} \sum_{l=1}^{\infty} E|X^{(j)}|^q I(l^{\alpha} < |X^{(j)}| \le (l+1)^{\alpha}) \sum_{m=l}^{\infty} m^{\alpha r - \alpha q - 1} \\
&\le C \sum_{j=1}^{\infty} E|X^{(j)}|^r + C \sum_{j=1}^{\infty} \sum_{l=1}^{\infty} l^{\alpha r - \alpha q} E|X^{(j)}|^q I(l^{\alpha} < |X^{(j)}| \le (l+1)^{\alpha})
\le C \sum_{j=1}^{\infty} E|X^{(j)}|^r < \infty.
\end{aligned}$$

Theorem 3.5

Let $0 < r < 2$, $1 \le p < 2$, and $\alpha p \ge 1$. Let $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ be an array of real numbers such that $\sum_{i=1}^{n} |a_{ni}|^q = O(n)$ for some $\max\{r, p\} < q \le 2$. Let $\{X_n, n \ge 1\}$ be a sequence of zero-mean $H$-valued CANA random vectors. Suppose that $\{X_n, n \ge 1\}$ is coordinatewise stochastically dominated by a random vector $X$. If

$$\begin{cases}
\sum_{j=1}^{\infty} E|X^{(j)}|^p < \infty, & \text{if } 0 < r < p, \\
\sum_{j=1}^{\infty} E|X^{(j)}|^p \log(1 + |X^{(j)}|) < \infty, & \text{if } r = p, \\
\sum_{j=1}^{\infty} E|X^{(j)}|^r < \infty, & \text{if } p < r < 2,
\end{cases}$$

then

$$\sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} E\left\{\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| - \varepsilon n^{\alpha}\right\}_+^r < \infty \quad \text{for any } \varepsilon > 0.$$

Proof

From Theorems 3.2 and 3.4, we have

$$\begin{aligned}
\sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} E\left\{\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| - \varepsilon n^{\alpha}\right\}_+^r
&= \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_0^{\infty} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| - \varepsilon n^{\alpha} > t^{1/r}\right) dt \\
&= \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_0^{n^{\alpha r}} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| - \varepsilon n^{\alpha} > t^{1/r}\right) dt \\
&\quad + \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| - \varepsilon n^{\alpha} > t^{1/r}\right) dt \\
&\le \sum_{n=1}^{\infty} n^{\alpha p - 2} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| > \varepsilon n^{\alpha}\right) + \sum_{n=1}^{\infty} n^{\alpha p - \alpha r - 2} \int_{n^{\alpha r}}^{\infty} P\left(\max_{1 \le m \le n} \Big\|\sum_{i=1}^{m} a_{ni} X_i\Big\| > t^{1/r}\right) dt < \infty.
\end{aligned}$$

Remark 3.5

Ko [21] obtained complete moment convergence for maximal partial sums of CNA random vectors with $r = 1$ under the condition $\alpha p > 1$. In Theorem 3.5, we only assume $\alpha p \ge 1$. Hence, Theorem 3.5 improves and extends the result of Ko [21] from maximal partial sums of CNA random vectors to maximal weighted sums of CANA random vectors.

Acknowledgment

The authors would like to thank the editor and the referees for their helpful comments, which improved the quality of the article.

  1. Funding information: This research was supported by the Key project of Anhui Provincial Party School system (QS202105).

  2. Author contributions: Qihui He was a major contributor in writing the manuscript. Lin Pan checked the article and gave some suggestions in revising the article. Both the authors read and approved the final manuscript.

  3. Conflict of interest: The authors declare no conflict of interest in this article.

References

[1] K. Joag-Dev and F. Proschan, Negative association of random variables with applications, Ann. Statist. 11 (1983), no. 1, 286–295, DOI: https://doi.org/10.1214/aos/1176346079.

[2] Q. M. Shao, A comparison theorem on moment inequalities between negatively associated and independent random variables, J. Theoret. Probab. 13 (2000), 343–356, DOI: https://doi.org/10.1023/A:1007849609234.

[3] A. Kuczmaszewska, On complete convergence in Marcinkiewicz-Zygmund type SLLN for negatively associated random variables, Acta Math. Hungar. 128 (2010), 116–130, DOI: https://doi.org/10.1007/s10474-009-9166-y.

[4] J. I. Baek, I. B. Choi, and S. L. Niu, On the complete convergence of weighted sums for arrays of negatively associated variables, J. Korean Statist. Soc. 37 (2008), 73–80, DOI: https://doi.org/10.1016/j.jkss.2007.08.001.

[5] A. Kuczmaszewska and Z. A. Lagodowski, Convergence rates in the SLLN for some classes of dependent random fields, J. Math. Anal. Appl. 380 (2011), no. 2, 571–584, DOI: https://doi.org/10.1016/j.jmaa.2011.03.042.

[6] M. H. Ko, T. S. Kim, and K. H. Han, A note on the almost sure convergence for dependent random variables in a Hilbert space, J. Theoret. Probab. 22 (2009), 506–513, DOI: https://doi.org/10.1007/s10959-008-0144-z.

[7] L. V. Thành, On the almost sure convergence for dependent random vectors in Hilbert spaces, Acta Math. Hungar. 139 (2013), no. 3, 276–285, DOI: https://doi.org/10.1007/s10474-012-0275-7.

[8] Y. Miao, Hájeck-Rényi inequality for dependent random variables in Hilbert space and applications, Rev. Union Mat. Argent. 53 (2012), no. 1, 101–112.

[9] N. V. Huan, N. V. Quang, and N. T. Thuan, Baum-Katz type theorems for coordinatewise negatively associated random vectors in Hilbert spaces, Acta Math. Hungar. 144 (2014), no. 1, 132–149, DOI: https://doi.org/10.1007/s10474-014-0424-2.

[10] L. Zhang and X. Wang, Convergence rates in the strong laws of asymptotically negatively associated random fields, Appl. Math. J. Chinese Univ. Ser. B 14 (1999), no. 4, 406–416, DOI: https://doi.org/10.1007/s11766-999-0070-6.

[11] M. H. Ko, Complete convergence for coordinatewise asymptotically negatively associated random vectors in Hilbert spaces, Comm. Statist. Theory Methods 47 (2018), no. 3, 671–680, DOI: https://doi.org/10.1080/03610926.2017.1310242.

[12] P. L. Hsu and H. Robbins, Complete convergence and the law of large numbers, Proc. Natl. Acad. Sci. USA 33 (1947), 25–31, DOI: https://doi.org/10.1073/pnas.33.2.25.

[13] K. Wang and X. Wang, Strong convergence properties for partial sums of asymptotically negatively associated random vectors in Hilbert spaces, Comm. Statist. Theory Methods 49 (2020), no. 22, 5578–5586, DOI: https://doi.org/10.1080/03610926.2019.1620279.

[14] Y. S. Chow, On the rate of moment complete convergence of sample sums and extremes, Bull. Inst. Math. Acad. Sin. 16 (1988), no. 3, 177–201.

[15] N. T. T. Hien and L. V. Thanh, On the weak laws of large numbers for sums of negatively associated random vectors in Hilbert spaces, Statist. Probab. Lett. 107 (2015), 236–245, DOI: https://doi.org/10.1016/j.spl.2015.08.030.

[16] A. Rosalsky and L. V. Thành, A note on the stochastic domination condition and uniform integrability with applications to the strong law of large numbers, Statist. Probab. Lett. 178 (2021), 109181, DOI: https://doi.org/10.1016/j.spl.2021.109181.

[17] L. V. Thành, On a new concept of stochastic domination and the laws of large numbers, TEST 32 (2023), 74–106, DOI: https://doi.org/10.1007/s11749-022-00827-w.

[18] Y. Wu, X. Wang, and A. Shen, Strong convergence properties for weighted sums of m-asymptotic negatively associated random variables and statistical applications, Statist. Papers 62 (2021), no. 5, 2169–2194, DOI: https://doi.org/10.1007/s00362-020-01179-z.

[19] Y. Wu, F. Zhang, and X. Wang, Convergence properties for weighted sums of weakly dependent random vectors in Hilbert spaces, Stochastics 92 (2020), no. 5, 716–731, DOI: https://doi.org/10.1080/17442508.2019.1652607.

[20] Q. Y. Wu, Probability Limit Theory for Mixing Sequences, Science Press, Beijing, 2006.

[21] M. H. Ko, The complete moment convergence for CNA random vectors in Hilbert spaces, J. Inequal. Appl. 2017 (2017), 290, DOI: https://doi.org/10.1186/s13660-017-1566-x.

Received: 2022-06-11
Revised: 2022-12-21
Accepted: 2023-01-21
Published Online: 2023-05-02

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
