Citation: | MENG Bing, WU Qunying. Complete Convergence and Complete Moment Convergence for Weighted Sums of ANA Random Variables[J]. Chinese Journal of Applied Probability and Statistics, 2024, 40(5): 710-724. |
In this paper, we investigate the complete convergence and complete moment convergence for weighted sums of arrays of rowwise asymptotically negatively associated (ANA) random variables, without assuming identical distribution. The obtained results not only extend those of An and Yuan[1] and Shen et al.[2] to the case of ANA random variables, but also partially improve them.
Let \{X_{n}; n{\geqslant} 1\} be a sequence of random variables defined on a fixed probability space \left(\Omega, \mathscr{F}, {\sf P}\right) . Hsu and Robbins[3] introduced the concept of complete convergence as follows. A sequence \{X_{n};n{\geqslant} 1\} of random variables is said to converge completely to a constant {\mu} if for all \varepsilon>0 ,
\begin{equation} \sum\limits_{n=1}^\infty {\sf P}(|X_{n}-\mu|>\varepsilon)<\infty. \end{equation} | (1) |
In view of the Borel-Cantelli lemma, this implies that X_{n}\rightarrow \mu almost surely (a.s.). The converse is true if \{X_{n};n{\geqslant} 1\} is independent. Hsu and Robbins[3] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Erdős[4] proved the converse. The result of Hsu and Robbins[3] and Erdős[4] is a fundamental theorem in probability theory, and it has been generalized and extended in several directions by many authors. One of the most important extensions is the following theorem of Baum and Katz[5] on the rate of convergence in the strong law of large numbers.
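For completeness, the Borel-Cantelli argument behind this implication can be written out in one line:

```latex
\sum_{n=1}^{\infty}{\sf P}\left(|X_{n}-\mu|>\varepsilon\right)<\infty
\;\Longrightarrow\;
{\sf P}\left(|X_{n}-\mu|>\varepsilon \ \text{infinitely often}\right)=0
\quad\text{for every } \varepsilon>0,
```

and taking \varepsilon=1/k for k=1,2,\dots together with a countable union of null sets gives X_{n}\to\mu a.s.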
Theorem 1 Assume that \alpha>1/2 and \alpha r>1 . Let \{X_{n}; n{\geqslant} 1\} be a sequence of independent and identically distributed random variables. Assume further that {\sf E}(X_{1})=0 for \alpha{\leqslant}1 . If {\sf E}\left( \left|X_{1}\right|^{r}\right) <\infty , then for all \varepsilon>0 ,
\begin{equation} \sum\limits_{n=1}^{\infty}n^{\alpha r-2} {\sf P}\left(\max\limits_{1{\leqslant} k{\leqslant} n}\left|\sum\limits_{i=1}^{k}X_{i}\right|> \varepsilon n^{\alpha}\right)<\infty. \end{equation} | (2) |
An and Yuan[1] extended Theorem 1 to the case of weighted sums for {{\rho }^{*}} -mixing sequence, and established the following result.
Theorem 2 Assume that \alpha>1/2 and \alpha r>1 . Let \{X_{n}; n{\geqslant} 1\} be a {{\rho }^{*}} -mixing sequence of identically distributed random variables. Let \{a_{ni}; 1{\leqslant} i{\leqslant} n, n{\geqslant}1\} be an array of real numbers such that \sum_{i=1}^{n}\left|a_{ni}\right|^{r}=O(n^{\delta}) for some 0<\delta<1 . Assume further that {\sf E}(X_{1})=0 for \alpha{\leqslant}1 . If {\sf E}\left( \left|X_{1}\right|^{r}\right) <\infty , then for all \varepsilon>0 ,
\begin{equation} \sum\limits_{n=1}^{\infty}n^{\alpha r-2} {\sf P}\left(\max\limits_{1{\leqslant} k{\leqslant} n}\left|\sum\limits_{i=1}^{k}a_{ni}X_{i}\right|> \varepsilon n^{\alpha}\right)<\infty. \end{equation} | (3) |
Recently, Shen et al.[2] extended and improved Theorem 2 to the case of weighted sums of extended negatively dependent (END, in short) random variables, and obtained the following results.
Theorem 3 Let 0<r<2, \alpha>0, \alpha r>1 , and \{X_{n}; n{\geqslant} 1\} be a sequence of END random variables, which is stochastically dominated by a random variable X . Let h(x)>0 be a slowly varying function. Assume further that {\sf E}(X_{n})=0 for r>1 , and that \{a_{ni}; 1{\leqslant} i{\leqslant} n, n{\geqslant}1\} is an array of real numbers such that
\begin{equation} \sum\limits_{i=1}^{n}a_{ni}^{2}=O\left(n\right). \end{equation} | (4) |
If
\begin{equation} {\sf E}\left( \left|X\right|^{r}\right) h(\left|X\right|^{1/\alpha})<\infty, \end{equation} | (5) |
then for all \varepsilon>0 ,
\begin{equation} \sum\limits_{n=1}^\infty n^{\alpha r-2}h(n) {\sf P}\left(\left|\sum\limits_{i=1}^n a_{ni}X_{i}\right|>\varepsilon n^{\alpha}\right)<\infty. \end{equation} | (6) |
Theorem 4 Assume that the conditions of Theorem 3 hold and 1<r<2 , and then for all \varepsilon>0 ,
\begin{equation} \sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n) {\sf E}\left(\left|\sum\limits_{i=1}^n a_{ni}X_{i}\right|-\varepsilon n^{\alpha}\right)^{+}<\infty. \end{equation} | (7) |
Inspired by the above theorems, in this paper we further investigate Baum-Katz type results for weighted sums of arrays of rowwise ANA random variables, extending and improving Theorem 3 and Theorem 4 to the maximum of weighted partial sums of ANA random variables under mild conditions. The proofs of the main results follow the method of Shen et al.[2].
In the following, we introduce some definitions of dependent structures.
Definition 1 A sequence of random variables \left\{ {{X}_{n}};n{\geqslant} 1 \right\} is said to be ANA if
\begin{equation*} {{\rho }^{-}}\left(s \right)=\sup \left\{ {{\rho }^{-}}\left(S, T \right):S, T\subset \mathbb{N}, \ \text{dist}\left(S, T \right){\geqslant} s \right\}\to 0, \quad \text{ as }s\to \infty , \end{equation*} |
where
\begin{equation*} {{\rho }^{-}}\left(S, T \right)=0\vee \sup\left\{\frac{ {\sf Cov}\, \left( f_{1}\left( {{X}_{i}}, i\in S \right), f_{2}\left( {{X}_{j}}, j\in T \right) \right)}{\left[ {\sf Var}\, \left\{f_{1}\left( {{X}_{i}}, i\in S \right)\right\} {\sf Var}\, \left\{f_{2}\left( {{X}_{j}}, j\in T \right)\right\} \right]^{1/2}}: f_{1}, f_{2}\in C \right\}, \end{equation*} |
where C is the class of coordinatewise non-decreasing functions.
Definition 2 A sequence of random variables \left\{ {{X}_{n}};n{\geqslant} 1 \right\} is called \rho^{*} -mixing if there exists an integer s{\geqslant} 1 such that the mixing coefficient
\begin{equation*} \rho^{*}\left( s \right)=\sup \left\{ \rho \left( S, T \right):S, T\subset \mathbb{N}, \ \text{dist}\left(S, T \right){\geqslant} s \right\}<1, \end{equation*} |
where
\begin{equation*} \rho \left( S, T \right)=\sup \left\{ \frac{ {\sf Cov}\, (X, Y)}{\sqrt{ {\sf Var}\, (X)}\sqrt{ {\sf Var}\, (Y)}};X\in {{L}_{2}}\left( \sigma (S) \right), Y\in {{L}_{2}}\left( \sigma (T) \right) \right\}, \end{equation*} |
and \sigma (S) , \sigma (T) are the \sigma -fields generated by \left\{ {{X}_{i}}, i\in S \right\} and \left\{ {{X}_{i}}, i\in T \right\} , respectively.
An array of random variables \left\{ {{X}_{ni}};i{\geqslant} 1, n{\geqslant} 1 \right\} is said to be rowwise ANA random variables if for every n{\geqslant} 1 , \left\{ {{X}_{ni}};i{\geqslant} 1 \right\} is a sequence of ANA random variables.
It is easy to see that {{\rho }^{-}}\left( s \right){\leqslant} \rho^{*}\left( s \right) , and that a sequence of ANA random variables is NA if and only if {{\rho }^{-}}\left( 1 \right)=0 . Hence NA random variables and \rho^{*} -mixing random variables (with \rho^{*}\left( s \right)\to 0 as s\to \infty ) are ANA; that is, the ANA class contains both. Therefore, it is of great interest to extend and improve the limit properties of NA and \rho^{*} -mixing random variables to the case of ANA random variables. Since the concept of ANA random variables was introduced by Zhang and Wang[6], many applications have been found. For example, Zhang and Wang[6] obtained the convergence rates in the strong laws of ANA random fields; Zhang[7–8] investigated the central limit theorems; Wang and Lu[9] studied some moment inequalities of the maximum of partial sums and weak convergence; Wang and Zhang[10] obtained the Berry-Esseen theorem and the law of the iterated logarithm; Budsaba et al.[11] provided some results on complete convergence for moving average process; Liu and Liu[12] established moments of the maximum of normed partial sums; Yuan and Wu[13] obtained the limiting behavior of the maximum of partial sums for ANA random variables under residual Cesàro alpha-integrability assumption; Tan et al.[14] obtained the almost sure central limit theorem of products of partial sums; Ko[15] obtained the Hájek-Rényi inequality and the strong law of large numbers; Zhang[16] investigated the complete moment convergence for moving average process; Huang et al.[17] obtained the complete convergence and complete moment convergence for arrays of rowwise ANA random variables, and so on. In this paper, we further study complete convergence and complete moment convergence for weighted sums of ANA random variables without assuming identical distribution.
Definition 3 Let \left\{ {{X}_{n}};n{\geqslant} 1 \right\} be a sequence of random variables, and let {{a}_{n}}>0 , {{b}_{n}}>0 , p>0 . If for all \varepsilon {\geqslant} 0 ,
\begin{equation*} \sum\limits_{n=1}^{\infty }{{{a}_{n}} {\sf E}\left( b_{n}^{-1}\left| {{X}_{n}} \right|-\varepsilon \right)_{+}^{p}}<\infty. \end{equation*} |
then \left\{ {{X}_{n}};n{\geqslant} 1 \right\} is said to satisfy the complete moment convergence, a concept introduced by Chow[18].
Definition 4 A sequence of random variables \left\{ {{X}_{n}};n{\geqslant} 1 \right\} is said to be stochastically dominated by a random variable X if there exists a positive constant C such that
\begin{equation*} {\sf P}\left( \left| {{X}_{n}} \right|{\geqslant} x \right){\leqslant} C {\sf P}\left( \left| X \right|{\geqslant} x \right), \end{equation*} |
for all x{\geqslant} 0 and n{\geqslant} 1 .
An array of rowwise random variables \left\{ {{X}_{ni}};i{\geqslant} 1, n{\geqslant} 1 \right\} is said to be stochastically dominated by a random variable X if there exists a positive constant C such that
\begin{equation*} {\sf P}\left( \left| {{X}_{ni}} \right|{\geqslant} x \right){\leqslant} C {\sf P}\left( \left| X \right|{\geqslant} x \right), \end{equation*} |
for all x{\geqslant} 0 , i{\geqslant} 1 and n{\geqslant} 1 .
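For orientation (this elementary observation is not part of the original results), stochastic domination generalizes the identical-distribution assumption used in Theorems 1 and 2:

```latex
\text{If } X_{ni}\overset{d}{=}X \text{ for all } i\geqslant 1,\ n\geqslant 1, \text{ then }
{\sf P}\left(\left|X_{ni}\right|\geqslant x\right)={\sf P}\left(\left|X\right|\geqslant x\right)
\quad\text{for all } x\geqslant 0,
```

so Definition 4 holds with C=1 ; stochastic domination is thus strictly weaker than assuming identical distribution.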
Throughout this paper, the symbol C denotes a positive constant which is not necessarily the same one in each appearance, and a_{n}=O(b_{n}) stands for a_{n}{\leqslant} Cb_{n}. Let I(A) be the indicator function of the set A .
Now, we state the main results as follows. The proofs will be given in Section 3.
Theorem 5 Let 0<r<2, \alpha>0, \alpha r>1 , and let \{X_{ni}; 1{\leqslant} i{\leqslant} n, n{\geqslant} 1\} be an array of rowwise ANA random variables which is stochastically dominated by a random variable X . Let h(x)>0 be a slowly varying and monotone non-decreasing function. Assume further that {\sf E}(X_{ni})=0 for r>1 , and that \{a_{ni}; 1{\leqslant} i{\leqslant} n, n{\geqslant}1\} is an array of real numbers satisfying (4). If (5) holds, then for all \varepsilon>0 ,
\begin{equation} \sum\limits_{n=1}^\infty n^{\alpha r-2}h(n) {\sf P}\left(\max\limits_{1{\leqslant} j{\leqslant} n}\left|\sum\limits_{i=1}^j a_{ni}X_{ni}\right|>\varepsilon n^{\alpha}\right)<\infty. \end{equation} | (8) |
Theorem 6 Assume that the conditions of Theorem 5 hold and 1<r<2 , and then for all \varepsilon>0 ,
\begin{equation} \sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n) {\sf E}\left(\max\limits_{1{\leqslant} j{\leqslant} n}\left|\sum\limits_{i=1}^j a_{ni}X_{ni}\right|-\varepsilon n^{\alpha}\right)^{+}<\infty. \end{equation} | (9) |
Remark 1 Since ANA includes NA and \rho^{*} -mixing, Theorem 5 extends Theorem 2 for weighted sums of identically distributed \rho^{*} -mixing random variables.
Remark 2 There are many examples of slowly varying functions which are positive and monotone non-decreasing, such as h(x)=1 , h(x)=\log x , and so forth.
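For instance, that h(x)=\log x is slowly varying can be checked directly from the defining limit in Lemma 10 (ⅰ):

```latex
\lim_{x\to\infty}\frac{h(ux)}{h(x)}
=\lim_{x\to\infty}\frac{\log u+\log x}{\log x}
=1 \quad\text{for each fixed } u>0,
```

and h(x)=\log x is positive and non-decreasing for x>1 .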
Remark 3 Under the conditions of Theorem 6, we can obtain
\begin{align} \infty &> {\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n) {\sf E}\left(\max\limits_{1{\leqslant} j{\leqslant} n}\left|\sum\limits_{i=1}^j a_{ni}X_{ni}\right|-\varepsilon n^{\alpha}\right)^{+}}\\ &= {\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{0}^{\infty }{ {\sf P}\left(\max\limits_{1{\leqslant} j{\leqslant} n}\left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{ni}}} \right|-\varepsilon n^{\alpha}>x \right) \mathrm{d} x}} \\ &{\geqslant} {\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{0}^{n^{\alpha}}{ {\sf P}\left(\max\limits_{1{\leqslant} j{\leqslant} n}\left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{ni}}} \right|>\left(1+\varepsilon\right) n^{\alpha} \right) \mathrm{d} x}} \\ &= {\sum\limits_{n=1}^\infty n^{\alpha r-2}h(n){ {\sf P}\left(\max\limits_{1{\leqslant} j{\leqslant} n}\left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{ni}}} \right|>\left(1+\varepsilon\right) n^{\alpha}\right)}}, \quad \text{ for all }\varepsilon >0. \end{align} | (10) |
Hence, from (10) and the arbitrariness of \varepsilon>0 , the complete moment convergence implies the complete convergence; in this sense, the complete moment convergence is the stronger result.
To prove the main results, we need the following lemmas.
Lemma 7 (Wang and Lu[9]) Let \left\{ {{X}_{n}}; n{\geqslant} 1 \right\} be a sequence of ANA random variables. If \left\{ {{f}_{n}}; n{\geqslant} 1 \right\} is a sequence of real functions all of which are monotone non-decreasing (or all monotone non-increasing), then \left\{ {{f}_{n}}\left( {{X}_{n}} \right); n{\geqslant} 1 \right\} is still a sequence of ANA random variables.
Lemma 8 (Wang and Lu[9]) For a positive real number q{\geqslant}2 , if \left\{ {{X}_{n}}; n{\geqslant} 1 \right\} is a sequence of ANA random variables with {\sf E}({X}_{n})=0 and {\sf E}\left( {\left| {{X}_{n}} \right|}^{q}\right) <\infty , then for all n{\geqslant} 1 , there exists a positive constant C=C\left( q, \rho^{-}(\cdot) \right) such that
\begin{align} {\sf E}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, {{\left| \sum\limits_{i=1}^{j}{{{X}_{i}}} \right|}^{q}} \right){\leqslant} C\left(\sum\limits_{i=1}^{n}{ {\sf E}\left( \left|X_{i}\right|^{q}\right) }+\left(\sum\limits_{i=1}^{n}{ {\sf E}\left( X_{i}^{2}\right) }\right)^{q/2}\right). \end{align} | (11) |
Lemma 9 (Adler and Rosalsky[19]; Adler et al.[20]) Let \{X_{ni}; i{\geqslant}1, n{\geqslant}1\} be an array of random variables which is stochastically dominated by a random variable X . Then for all u>0 and x>0 , the following statements hold:
\begin{equation} {\sf E}\left( {\left| {{X}_{ni}} \right|}^{u}\right) I\left( \left| {{X}_{ni}} \right|{\leqslant} x \right){\leqslant} C\left( {\sf E}\left( {\left| X \right|}^{u}\right) I\left( \left| X \right|{\leqslant} x \right)+{{x}^{u}} {\sf P}\left( \left| X \right|>x \right) \right), \end{equation} | (12) |
\begin{equation} {\sf E}\left( {\left| {{X}_{ni}} \right|}^{u}\right) I\left( \left| {{X}_{ni}} \right|>x \right){\leqslant} C {\sf E}\left( {\left| X \right|}^{u}\right) I\left( \left| X \right|>x \right). \end{equation} | (13) |
Lemma 10 (Bai and Su[21]) If h(x)>0 is a slowly varying function, then
(ⅰ) \lim_{x\rightarrow \infty}\frac{h\left(ux\right)}{h\left(x\right)}=1 for each u>0 , \lim_{x\rightarrow \infty}\frac{h\left(x+t\right)}{h\left(x\right)}=1 for each t{\geqslant}0 ;
(ⅱ) \lim_{k\rightarrow \infty}\sup_{2^{k}{\leqslant} x<2^{k+1}}\frac{h\left(x\right)}{h(2^{k})}=1 ;
(ⅲ) \lim_{x\rightarrow \infty}x^{\delta}h(x)=\infty , \lim_{x\rightarrow \infty}x^{-\delta}h(x)=0 for each \delta>0 ;
(ⅳ) C_{1}2^{mr}h(\varepsilon 2^{m}){\leqslant} \sum_{j=1}^{m}2^{jr}h(\varepsilon 2^{j}){\leqslant} C_{2}2^{mr}h(\varepsilon 2^{m}) for every r>0 , \varepsilon>0 , positive integer m and some C_{1}>0 , C_{2}>0 ;
(ⅴ) C_{3}2^{mr}h(\varepsilon 2^{m}){\leqslant} \sum_{j=m}^{\infty}2^{jr}h(\varepsilon 2^{j}){\leqslant} C_{4}2^{mr}h(\varepsilon 2^{m}) for every r<0 , \varepsilon>0 , positive integer m and some C_{3}>0 , C_{4}>0 .
Proof of Theorem 5 Without loss of generality, suppose that {{a}_{ni}}{\geqslant} 0 (otherwise, we use a_{ni}^{+} and a_{ni}^{-} instead of {{a}_{ni}} , and note that {{a}_{ni}}=a_{ni}^{+}-a_{ni}^{-} ). For fixed n{\geqslant} 1 , we define
\begin{equation*} {{Y}_{ni}}=-n^{\alpha}I\left( {{X}_{ni}}<-{n^{\alpha}} \right)+{{X}_{ni}}I\left( \left| {{X}_{ni}} \right|{\leqslant} n^{\alpha} \right)+n^{\alpha}I\left( {{X}_{ni}}>{n^{\alpha}} \right), \quad i{\geqslant} 1. \end{equation*} |
It is easy to check that for all \varepsilon >0 ,
\begin{equation*} { {\sf E}_{ni}}=\left(\max\limits_{1{\leqslant} j{\leqslant} n}\left|\sum\limits_{i=1}^{j}a_{ni}X_{ni}\right|>\varepsilon n^{\alpha}\right) \subset \left(\max\limits_{1{\leqslant} j{\leqslant} n}\left|\sum\limits_{i=1}^{j}a_{ni}Y_{ni}\right|>\varepsilon n^{\alpha}\right) \cup\left(\bigcup\limits_{i=1}^{n}\left(\left|X_{ni}\right|>n^{\alpha}\right)\right), \end{equation*} |
which implies that
\begin{align} {\sf P}\left( {\sf E}_{ni}\right) &{\leqslant} {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{{a_{ni}{Y}_{ni}}} \right|>\varepsilon {n^{\alpha}} \right) + {\sf P}\left( \bigcup\limits_{i=1}^{n}{\left( \left| {{X}_{ni}} \right|>{n^{\alpha}} \right)} \right) \\ {\leqslant}& {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{ a_{ni}\left({Y}_{ni}- {\sf E}({Y}_{ni})\right) } \right|>\varepsilon {n^{\alpha}}-\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{ {\sf E}(a_{ni}{Y}_{ni})} \right| \right)\\ &+ \sum\limits_{i=1}^{n}{ {\sf P}\left( \left| {{X}_{ni}} \right|>{n^{\alpha}} \right)}. \end{align} | (14) |
It follows from (4) and the Hölder inequality that
\begin{equation} \sum\limits_{i=1}^{n}{\left|{a_{ni}}\right|}{\leqslant} \left(n\sum\limits_{i=1}^{n}a_{ni}^{2}\right)^{1/2}{\leqslant} Cn. \end{equation} | (15) |
Firstly, we show that
\begin{align} \frac{1}{{n^{\alpha}}}\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{ {\sf E}({a}_{ni}{{Y}_{ni}})} \right|\to 0, \quad \text{ as }n\to \infty. \end{align} | (16) |
Since h(x)>0 is a monotone non-decreasing function, we have
\begin{align} \left|X\right|^{r} &= \left|X\right|^{r}I\left(\left|X\right|{\leqslant}1\right) +\left|X\right|^{r}h\left(\left|X\right|^{1/\alpha}\right) \frac{1}{h\left(\left|X\right|^{1/\alpha}\right)}I\left(\left|X\right|>1\right)\\ &{\leqslant} 1+\left|X\right|^{r}h\left(\left|X\right|^{1/\alpha}\right)\frac{1}{h(1)}, \end{align} | (17) |
which together with (5) yields that {\sf E}\left( \left|X\right|^{r}\right) <\infty .
We consider the following two cases:
(ⅰ) If 0<\alpha{\leqslant}1 , then r>1 . By {\sf E}(X_{ni})=0 , Lemma 9 and {\sf E}\left( \left|X\right|^{r}\right) <\infty , we get
\begin{align} \frac{1}{{n^{\alpha}}}\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{ {\sf E}({a}_{ni}{{Y}_{ni}})} \right| &{\leqslant} C\frac{1}{{n^{\alpha}}}\sum\limits_{i=1}^{n}{\left| {\sf E}({a}_{ni}{{Y}_{ni}}) \right|} \\ &{\leqslant} C{n^{-\alpha}}\sum\limits_{i=1}^{n}\left|{{a}_{ni}}\right|\left| {\sf E}\left( {{X}_{ni}} I\left( \left| {{X}_{ni}} \right|{\leqslant} {n^{\alpha}} \right)\right)\right|+C\sum\limits_{i=1}^{n}\left|{{a}_{ni}}\right|{ {\sf P}\left( \left| {{X}_{ni}} \right|>{n^{\alpha}} \right)} \\ &= C{n^{-\alpha}}\sum\limits_{i=1}^{n}\left|{{a}_{ni}}\right|\left| {\sf E}\left( {{X}_{ni}} I\left( \left| {{X}_{ni}} \right|> {n^{\alpha}} \right)\right)\right|+C\sum\limits_{i=1}^{n}\left|{{a}_{ni}}\right|{ {\sf P}\left( \left| {{X}_{ni}} \right|>{n^{\alpha}} \right)} \\ &{\leqslant} C{n^{-\alpha}}\sum\limits_{i=1}^{n}\left|{{a}_{ni}}\right|{ {\sf E}\left( \left| {{X}} \right|\right) I\left( \left| {{X}} \right|> {n^{\alpha}} \right)}+C\sum\limits_{i=1}^{n}\left|{{a}_{ni}}\right|{ {\sf P}\left( \left| {{X}} \right|>{n^{\alpha}} \right)} \\ &{\leqslant} C{n^{1-\alpha}}{ {\sf E}\left( \left| {{X}} \right|\right) I\left( \left| {{X}} \right|> {n^{\alpha}} \right)} \\ &{\leqslant} Cn^{1-\alpha r} {\sf E}\left( \left|X\right|^{r}\right) \to 0, \quad \text{ as }n\to \infty, \end{align} | (18) |
where the equality in the third line uses {\sf E}(X_{ni})=0 , and the last two lines use \sum_{i=1}^{n}\left|a_{ni}\right|{\leqslant} Cn together with the Markov-type bounds {\sf P}\left(\left|X\right|>n^{\alpha}\right){\leqslant} n^{-\alpha} {\sf E}\left(\left|X\right|\right)I\left(\left|X\right|>n^{\alpha}\right) and {\sf E}\left(\left|X\right|\right)I\left(\left|X\right|>n^{\alpha}\right){\leqslant} n^{-\alpha\left(r-1\right)} {\sf E}\left(\left|X\right|^{r}\right) .
(ⅱ) If \alpha>1 , then, by Lemma 9 again, we get
\begin{align} &\frac{1}{{n^{\alpha}}}\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{ {\sf E}({a}_{ni}{{Y}_{ni}})} \right| \end{align} | (19) |
\begin{align} {\leqslant}& C\frac{1}{{n^{\alpha}}}\sum\limits_{i=1}^{n}{\left| {\sf E}({a}_{ni}{{Y}_{ni}}) \right|} \\ {\leqslant}& C{n^{-\alpha}}\sum\limits_{i=1}^{n}{ {\sf E}\left( \left| {{a}_{ni}}{{X}_{ni}} \right|\right) I\left( \left| {{X}_{ni}} \right|{\leqslant} {n^{\alpha}} \right)}+C\sum\limits_{i=1}^{n}\left|{{a}_{ni}}\right|{ {\sf P}\left( \left| {{X}_{ni}} \right|>{n^{\alpha}} \right)} \\ {\leqslant}& C{n^{-\alpha}}\sum\limits_{i=1}^{n}\left[{ {\sf E}\left( \left| {{a}_{ni}}{{X}} \right|\right) I\left( \left| {{X}} \right|{\leqslant} {n^{\alpha}} \right)}+{n^{\alpha}}{ {\sf P}\left( \left| {X} \right|>{n^{\alpha}} \right)}\right] \\ &+ C\sum\limits_{i=1}^{n}\left|{{a}_{ni}}\right|{ {\sf P}\left( \left| {X} \right|>{n^{\alpha}} \right)} \\ {\leqslant}& C{n^{1-\alpha}}{ {\sf E}\left( \left| {{X}} \right|\right) I\left( \left| {{X}} \right|{\leqslant} {n^{\alpha}} \right)} +C{n^{1-\alpha r}} {\sf E}\left( \left|X\right|^{r} \right) \\ {\leqslant}& Cn^{1-\alpha}\sum\limits_{m=1}^{n} {\sf E}\left( \left|X\right|\right) I\left(\left(m-1\right)^{\alpha}<\left|X\right|{\leqslant} m^{\alpha}\right)+C{n^{1-\alpha r}}. \end{align} | (20) |
Since {\sf E}\left( \left|X\right|^{r}\right) <\infty , by the Markov inequality we have
\begin{align} \sum\limits_{m=1}^{\infty}m^{1-\alpha} {\sf E}\left( \left|X\right|\right) I\left(\left(m-1\right)^{\alpha}<\left|X\right|{\leqslant} m^{\alpha}\right) &{\leqslant} \sum\limits_{m=1}^{\infty}m {\sf P}\left(\left(m-1\right)^{\alpha}<\left|X\right|{\leqslant} m^{\alpha}\right) \\ &= \sum\limits_{j=0}^{\infty} {\sf P}\left(\left|X\right|>j^{\alpha}\right) \\ &{\leqslant} 1+\sum\limits_{j=1}^{\infty}\frac{ {\sf E}\left( \left|X\right|^{r}\right) }{j^{\alpha r}}<\infty, \end{align} | (21) |
which together with the Kronecker lemma yields that
\begin{equation} n^{1-\alpha}\sum\limits_{m=1}^{n} {\sf E}\left( \left|X\right|\right) I\left(\left(m-1\right)^{\alpha}<\left|X\right|{\leqslant} m^{\alpha}\right)\to 0, \quad \text{ as }n\to \infty. \end{equation} | (22) |
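The version of the Kronecker lemma used here is the standard one:

```latex
\text{If } b_{m}\uparrow\infty \text{ and } \sum_{m=1}^{\infty}\frac{x_{m}}{b_{m}}<\infty
\text{ with } x_{m}\geqslant 0,
\quad\text{then}\quad
\frac{1}{b_{n}}\sum_{m=1}^{n}x_{m}\to 0, \quad \text{ as } n\to\infty.
```

It is applied with b_{m}=m^{\alpha-1} (increasing since \alpha>1 in this case) and x_{m}={\sf E}\left( \left|X\right|\right) I\left(\left(m-1\right)^{\alpha}<\left|X\right|{\leqslant} m^{\alpha}\right) , so (21) yields (22).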
By (19)–(22), we obtain (16) immediately. Hence, for n large enough,
\begin{align} {\sf P}\left( {{ {\sf E}}_{ni}} \right){\leqslant} {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{{{a}_{ni}}\left( {{Y}_{ni}}- {\sf E}({Y}_{ni}) \right)} \right|>\frac{\varepsilon {n^{\alpha}}}{2} \right)+\sum\limits_{i=1}^{n}{ {\sf P}\left( \left| {{X}_{ni}} \right|>{n^{\alpha}} \right)}. \end{align} | (23) |
To prove (8), it suffices to show that
\begin{align} I\triangleq \sum\limits_{n=1}^{\infty }{n^{\alpha r-2}h\left(n\right) {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{{{a}_{ni}}\left( {{Y}_{ni}}- {\sf E}({Y}_{ni}) \right)} \right|>\frac{\varepsilon {n^{\alpha}}}{2} \right)}<\infty, \end{align} | (24) |
and
\begin{equation} J\triangleq \sum\limits_{n=1}^{\infty }{n^{\alpha r-2}h\left(n\right)\sum\limits_{i=1}^{n}{ {\sf P}\left( \left| {{X}_{ni}} \right|>{n^{\alpha}} \right)}}<\infty. \end{equation} | (25) |
By Lemma 7, it obviously follows that \left\{ {{Y}_{ni}}- {\sf E}({Y}_{ni}), i{\geqslant} 1, n{\geqslant} 1 \right\} is still an array of rowwise ANA random variables. Hence, it follows from the Markov inequality, Lemma 8 (for q=2 ) and (4) that
\begin{align} I&{\leqslant} C\sum\limits_{n=1}^{\infty }{n^{\alpha r-2-2\alpha}h\left(n\right)} {\sf E}\left(\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{{{a}_{ni}}\left( {{Y}_{ni}}- {\sf E}({Y}_{ni}) \right)} \right|^{2}\right) \\ &{\leqslant} C\sum\limits_{n=1}^{\infty }{n^{\alpha r-2-2\alpha}h\left(n\right)}\sum\limits_{i=1}^{n} {\sf E}\left( \left|{{{a}_{ni}}\left( {{Y}_{ni}}- {\sf E}({Y}_{ni}) \right)}\right|^{2}\right) \\ &{\leqslant} C\sum\limits_{n=1}^{\infty }{n^{\alpha r-2-2\alpha}h\left(n\right)}\sum\limits_{i=1}^{n}{{a}_{ni}^{2}} {\sf E}\left( \left|{ {{Y}_{ni}} }\right|^{2}\right) \\ &{\leqslant} C\sum\limits_{n=1}^{\infty }{n^{\alpha r-1-2\alpha}h\left(n\right)} {\sf E}\left( X^{2}\right) I\left(\left|X\right|{\leqslant} {n^{\alpha}}\right)+C\sum\limits_{n=1}^{\infty }{n^{\alpha r-1}h\left(n\right)} {\sf P}\left(\left|X\right|>{n^{\alpha}}\right) \\ &\triangleq {{I}_{1}}+{{I}_{2}}. \end{align} | (26) |
By Lemma 10 and (5), we get
\begin{align} {{I}_{1}}&= C\sum\limits_{n=1}^{\infty }{n^{\alpha r-1-2\alpha}h\left(n\right)} {\sf E}\left( X^{2}\right)I\left(\left|X\right|{\leqslant} {n^{\alpha}}\right) \\ &= C\sum\limits_{j=1}^{\infty }\sum\limits_{2^{j}{\leqslant} n<2^{j+1}}n^{\alpha r-1-2\alpha}h\left(n\right) {\sf E}\left( X^{2}\right)I\left(\left|X\right|{\leqslant} {n^{\alpha}}\right) \\ &{\leqslant} C\sum\limits_{j=1}^{\infty }2^{\alpha\left(r-2\right)j}h(2^{j}) {\sf E}\left( X^{2}\right)I(\left|X\right|{\leqslant} {2^{\alpha\left(j+1\right)}}) \\ &{\leqslant} C\sum\limits_{j=1}^{\infty }2^{\alpha\left(r-2\right)j}h(2^{j}) {\sf E}\left( X^{2}\right)I(\left|X\right|{\leqslant} {2^{\alpha}}) +C\sum\limits_{j=1}^{\infty }2^{\alpha\left(r-2\right)j}h(2^{j})\sum\limits_{m=1}^{j} {\sf E}\left( X^{2}\right)I(2^{\alpha m}<\left|X\right|{\leqslant} 2^{\alpha\left(m+1\right)}) \\ &{\leqslant} C+C\sum\limits_{m=1}^{\infty } {\sf E}\left( X^{2}\right)I(2^{\alpha m}<\left|X\right|{\leqslant} 2^{\alpha\left(m+1\right)})\sum\limits_{j=m}^{\infty}2^{\alpha\left(r-2\right)j}h(2^{j}) \\ &{\leqslant} C+C\sum\limits_{m=1}^{\infty }2^{\alpha\left(r-2\right)m}h(2^{m}) {\sf E}\left( X^{2}\right)I(2^{\alpha m}<\left|X\right|{\leqslant} 2^{\alpha\left(m+1\right)}) \\ &{\leqslant} C+C {\sf E}\left( \left|X\right|^{r}\right) h(\left|X\right|^{1/\alpha})<\infty, \end{align} | (27) |
where \sum_{j=1}^{\infty}2^{\alpha\left(r-2\right)j}h(2^{j})<\infty since \alpha\left(r-2\right)<0 and h is slowly varying (Lemma 10 (ⅲ)), and the tail sum over j{\geqslant} m is bounded by Lemma 10 (ⅴ).
By Lemma 10 and (5) again, we get
\begin{align} {{I}_{2}}&= C\sum\limits_{n=1}^{\infty }{n^{\alpha r-1}h\left(n\right)} {\sf P}\left(\left|X\right|>{n^{\alpha}}\right) \\ &= C\sum\limits_{j=1}^{\infty }\sum\limits_{2^{j}{\leqslant} n<2^{j+1}}n^{\alpha r-1}h\left(n\right) {\sf P}\left(\left|X\right|>{n^{\alpha}}\right) \\ &{\leqslant} C\sum\limits_{j=1}^{\infty }2^{\left(\alpha r-1\right)j}2^{j}h(2^{j}) {\sf P}\left(\left|X\right|>{2^{\alpha j}}\right) \\ &= C\sum\limits_{j=1}^{\infty }2^{\alpha rj}h(2^{j})\sum\limits_{m=j}^{\infty} {\sf P}\left(2^{\alpha m}<\left|X\right|{\leqslant} 2^{\alpha\left(m+1\right)}\right) \\ &= C\sum\limits_{m=1}^{\infty }\sum\limits_{j=1}^{m}2^{\alpha rj}h(2^{j}) {\sf P}\left(2^{\alpha m}<\left|X\right|{\leqslant} 2^{\alpha\left(m+1\right)}\right) \\ &{\leqslant} C\sum\limits_{m=1}^{\infty }2^{\alpha rm}h(2^{m}) {\sf P}\left(2^{\alpha m}<\left|X\right|{\leqslant} 2^{\alpha\left(m+1\right)}\right) \\ &{\leqslant} C {\sf E}\left( \left|X\right|^{r}\right) h(\left|X\right|^{1/\alpha})<\infty. \end{align} | (28) |
Similar to the proof of {{I}_{2}}<\infty , we can see that
\begin{equation} {J} {\leqslant} C\sum\limits_{n=1}^{\infty }{n^{\alpha r-1}h\left(n\right)} {\sf P}\left(\left|X\right|>{n^{\alpha}}\right)<\infty. \end{equation} | (29) |
This completes the proof of Theorem 5.
Proof of Theorem 6 For all \varepsilon >0 , we have
\begin{align} &{\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n) {\sf E}\left(\max\limits_{1{\leqslant} j{\leqslant} n}\left|\sum\limits_{i=1}^j a_{ni}X_{ni}\right|-\varepsilon n^{\alpha}\right)^{+}}\\ =& {\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{0}^{\infty }{ {\sf P}\left(\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{ni}}} \right|-\varepsilon n^{\alpha}>x \right) \mathrm{d} x}} \\ =& {\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{0}^{n^{\alpha}}{ {\sf P}\left(\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{ni}}} \right|>\varepsilon n^{\alpha}+x \right) \mathrm{d} x}} \\ &+ {\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{ {\sf P}\left(\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{ni}}} \right|>\varepsilon n^{\alpha}+x \right) \mathrm{d} x}} \\ {\leqslant}& {\sum\limits_{n=1}^\infty n^{\alpha r-2}h(n){ {\sf P}\left(\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{ni}}} \right|>\varepsilon n^{\alpha}\right)}} \\ &+ {\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{ {\sf P}\left(\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{ni}}} \right|>x \right) \mathrm{d} x}} \\ \triangleq& {{H}_{1}}+{{H}_{2}}. \end{align} | (30) |
To prove Theorem 6, it suffices to show that {{H}_{1}}<\infty and {{H}_{2}}<\infty . By the proof of Theorem 5, we directly obtain H_{1}<\infty . Following the notation and method of the proof of Theorem 5, for fixed n{\geqslant} 1 , i{\geqslant} 1 and all x{\geqslant} {n^{\alpha}} , we define
\begin{equation*} Y_{ni}^{\prime}=-xI\left( {{X}_{ni}}<-x \right)+{{X}_{ni}}I\left( \left| {{X}_{ni}} \right|{\leqslant} x \right)+xI\left( {{X}_{ni}}>x \right); \end{equation*} |
\begin{equation*} Z_{ni}={{X}_{ni}}-Y_{ni}^{\prime}= \left( {{X}_{ni}}+{x} \right)I\left( {{X}_{ni}}<-x \right)+\left( {{X}_{ni}}-{x} \right)I\left( {{X}_{ni}}>x \right); \end{equation*} |
\begin{align*} {\sf E}_{ni}^{\prime}=\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{ni}}} \right|>{x} \right). \end{align*} |
It is easy to check that
\begin{align} {\sf P}\left( {\sf E}_{ni}^{\prime} \right){\leqslant}& {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni}Z_{ni}} \right|>{x/3} \right) + {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni} {\sf E}\left( Y_{ni}^{\prime}\right) } \right|>{x/3} \right) \\ &+ {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni}\left(Y_{ni}^{\prime}- {\sf E}\left( Y_{ni}^{\prime}\right) \right)} \right|>{x/3} \right), \end{align} | (31) |
which implies that
\begin{align} {H_{2}} =& {\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{ {\sf P}\left(\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{ni}}} \right|>x \right) \mathrm{d} x}} \\ {\leqslant}& \sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty} {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni}Z_{ni}} \right|>{x/3} \right) \mathrm{d} x \\ &+\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty} {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni} {\sf E}\left( Y_{ni}^{\prime}\right) } \right|>{x/3} \right) \mathrm{d} x \\ &+\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty} {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni}\left(Y_{ni}^{\prime}- {\sf E}\left( Y_{ni}^{\prime}\right) \right)} \right|>{x/3} \right) \mathrm{d} x \\ \triangleq& {H_{21}}+{H_{22}}+{H_{23}}. \end{align} | (32) |
Since \left|Z_{ni}\right|{\leqslant} \left|X_{ni}\right|I\left(\left|X_{ni}\right|>x\right) , it follows from the Markov inequality, Lemma 9, Lemma 10 and (5) that
\begin{align} {H_{21}} &= \sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty} {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni}Z_{ni}} \right|>{x/3} \right) \mathrm{d} x \\ &{\leqslant} C\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty}x^{-1} {\sf E}\left(\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni}Z_{ni}} \right|\right) \mathrm{d} x \\ &{\leqslant} C\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty}x^{-1}\sum\limits_{i=1}^{n} {\sf E}\left( \left|{a_{ni}Z_{ni}}\right|\right) \mathrm{d} x \\ &{\leqslant} C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n)\int_{n^{\alpha}}^{\infty}x^{-1} \underset{1{\leqslant} i{\leqslant} n}{\mathop{\max }}\, {\sf E}\left( \left|{Z_{ni}}\right|\right) \mathrm{d} x \\ &{\leqslant} C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n)\int_{n^{\alpha}}^{\infty}x^{-1} \underset{1{\leqslant} i{\leqslant} n}{\mathop{\max }}\, {\sf E}\left( \left|{X_{ni}}\right| I\left(\left|{X_{ni}}\right|>x\right)\right) \mathrm{d} x \\ &{\leqslant} C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n)\int_{n^{\alpha}}^{\infty}x^{-1} {\sf E}\left( \left|{X}\right| I\left(\left|{X}\right|>x\right)\right) \mathrm{d} x \\ &= C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n){\sum\limits_{k=n}^{\infty}}\int_{k^{\alpha}}^{\left(k+1\right)^{\alpha}} x^{-1} {\sf E}\left( \left|{X}\right| I\left(\left|{X}\right|>x\right)\right) \mathrm{d} x \\ &{\leqslant} C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n){\sum\limits_{k=n}^{\infty}}k^{-1} {\sf E}\left( \left|{X}\right| I\left(\left|{X}\right|>k^{\alpha}\right)\right) \\ &= C\sum\limits_{k=1}^\infty k^{-1} {\sf E}\left( \left|{X}\right| I\left(\left|{X}\right|>k^{\alpha}\right)\right)\sum\limits_{n=1}^{k}n^{\alpha r-1-\alpha}h(n)\\ &{\leqslant} C\sum\limits_{k=1}^\infty k^{-1} {\sf E}\left( \left|{X}\right| I\left(\left|{X}\right|>k^{\alpha}\right)\right)k^{\alpha r-\alpha}h(k)\\ &= C\sum\limits_{n=1}^\infty n^{\alpha r-\alpha-1}h(n) {\sf E}\left( \left|{X}\right| I\left(\left|{X}\right|>n^{\alpha}\right)\right) \\ &= C\sum\limits_{n=1}^\infty n^{\alpha r-\alpha-1}h(n){\sum\limits_{k=n}^{\infty}} {\sf E}\left( \left|{X}\right| I\left({k^{\alpha}}<\left|X\right|{\leqslant}\left(k+1\right)^{\alpha}\right)\right) \\ &= C\sum\limits_{k=1}^\infty {\sf E}\left( \left|{X}\right| I\left({k^{\alpha}}<\left|X\right|{\leqslant}\left(k+1\right)^{\alpha}\right)\right) \sum\limits_{n=1}^{k}n^{\alpha r-\alpha-1}h(n) \\ &{\leqslant} C\sum\limits_{k=1}^\infty k^{\alpha\left(r-1\right)}h(k) {\sf E}\left( \left|{X}\right| I\left({k^{\alpha}}<\left|X\right|{\leqslant}\left(k+1\right)^{\alpha}\right)\right) \\ &{\leqslant} C {\sf E}\left( \left|{X}\right|^{\frac{\alpha\left(r-1\right)}{\alpha}+1} h(\left|X\right|^{1/\alpha})\right) \\ &= C {\sf E}\left( \left|X\right|^{r} h(\left|X\right|^{1/\alpha})\right)<\infty. \end{align} (33)
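Two elementary estimates are used repeatedly in the chain above; for the reader's convenience we record them explicitly (the constant C depends only on \alpha , r and h , and the partial-sum bound is the discrete form of Karamata's theorem for the slowly varying function h , valid since \alpha r-\alpha>0 ):

```latex
% Block estimate on [k^{\alpha},(k+1)^{\alpha}]: the integrand is monotone, so
\int_{k^{\alpha}}^{(k+1)^{\alpha}} x^{-1}\,\mathrm{d} x
  = \alpha \ln\Bigl(1+\frac{1}{k}\Bigr) \leqslant \frac{\alpha}{k},
\qquad
{\sf E}\bigl(|X| I(|X|>x)\bigr) \leqslant {\sf E}\bigl(|X| I(|X|>k^{\alpha})\bigr)
\quad \text{for } x \geqslant k^{\alpha}.

% Karamata-type partial-sum bound with s := \alpha r-\alpha > 0:
\sum_{n=1}^{k} n^{s-1} h(n) \leqslant C\, k^{s} h(k).
```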
By the Markov inequality and (33), we get
\begin{align} {H_{22}} &= \sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty} {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni} {\sf E}\left( Y_{ni}^{\prime}\right) } \right|>{x/3} \right) \mathrm{d} x \\ &{\leqslant} C\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{x^{-1}} {\sf E}\left(\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni} {\sf E}\left( Y_{ni}^{\prime}\right)} \right|\right) \mathrm{d} x \\ &{\leqslant} C\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty} {x^{-1}}{\sum\limits_{i=1}^{n}}\left|{a_{ni} {\sf E}\left( Y_{ni}^{\prime}\right)}\right| \mathrm{d} x \\ &{\leqslant} C\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{x^{-1}}{\sum\limits_{i=1}^{n}} {\sf E}\left( \left|a_{ni}{X_{ni}}\right| I\left(\left|{X_{ni}}\right|>x\right)\right) \mathrm{d} x \\ &{\leqslant} C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{x^{-1}} {\sf E}\left( \left|{X}\right| I\left(\left|{X}\right|>x\right)\right) \mathrm{d} x<\infty. \end{align} (34)
By the Markov inequality, Lemma 8 (with q=2 ), (33) and (27), we get
\begin{align} {{H}_{23}}=& \sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty} {\sf P}\left( \underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni}\left(Y_{ni}^{\prime}- {\sf E}\left( Y_{ni}^{\prime}\right)\right)} \right|>{x/3} \right) \mathrm{d} x \\ {\leqslant}& C\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{x^{-2}} {\sf E}\left(\underset{1{\leqslant} j{\leqslant} n}{\mathop{\max }}\, \left| \sum\limits_{i=1}^{j}{a_{ni}\left(Y_{ni}^{\prime}- {\sf E}\left( Y_{ni}^{\prime}\right)\right)} \right|^{2}\right) \mathrm{d} x \\ {\leqslant}& C\sum\limits_{n=1}^\infty n^{\alpha r-2-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{x^{-2}}{\sum\limits_{i=1}^{n}}a_{ni}^{2} {\sf E}\left( \left|Y_{ni}^{\prime}- {\sf E}\left( Y_{ni}^{\prime}\right)\right|^{2}\right) \mathrm{d} x \\ {\leqslant}& C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{x^{-2}} \underset{1{\leqslant} i{\leqslant} n}{\mathop{\max }}\, {\sf E}\left( \left(Y_{ni}^{\prime}\right)^{2}\right) \mathrm{d} x \\ {\leqslant}& C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{x^{-2}} \underset{1{\leqslant} i{\leqslant} n}{\mathop{\max }}\, \left[ {\sf E}\left( X_{ni}^{2} I\left(\left|X_{ni}\right|{\leqslant} x\right)\right)+x^{2} {\sf P}\left(\left|X_{ni}\right|>x\right)\right] \mathrm{d} x \\ {\leqslant}& C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{x^{-2}} {\sf E}\left( X^{2} I\left(\left|X\right|{\leqslant} x\right)\right) \mathrm{d} x \\ &+ C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n)\int_{n^{\alpha}}^{\infty}{x^{-1}} {\sf E}\left( \left|X\right| I\left(\left|X\right|> x\right)\right) \mathrm{d} x \\ {\leqslant}& C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n){\sum\limits_{k=n}^{\infty}}\int_{k^{\alpha}}^{\left(k+1\right)^{\alpha}} {x^{-2}} {\sf E}\left( X^{2}I\left(\left|X\right|{\leqslant} x\right)\right) \mathrm{d} x+C \\ {\leqslant}& C\sum\limits_{n=1}^\infty n^{\alpha r-1-\alpha}h(n){\sum\limits_{k=n}^{\infty}}{k^{-\alpha-1}} {\sf E}\left( X^{2}I\left(\left|X\right|{\leqslant} {\left(k+1\right)^{\alpha}}\right)\right) \\ =& C\sum\limits_{k=1}^\infty{k^{-\alpha-1}} {\sf E}\left( X^{2}I\left(\left|X\right|{\leqslant} {\left(k+1\right)^{\alpha}}\right)\right) {\sum\limits_{n=1}^{k}}n^{\alpha r-1-\alpha}h(n) \\ {\leqslant}& C\sum\limits_{k=1}^\infty{k^{-\alpha-1}} {\sf E}\left( X^{2}I\left(\left|X\right|{\leqslant} {\left(k+1\right)^{\alpha}}\right)\right) k^{\alpha r-\alpha}h(k) \\ =& C\sum\limits_{k=1}^\infty{k^{\alpha r-2\alpha-1}}h(k) {\sf E}\left( X^{2} I\left(\left|X\right|{\leqslant} {\left(k+1\right)^{\alpha}}\right)\right) \\ =& C\sum\limits_{n=1}^\infty{n^{\alpha r-2\alpha-1}}h(n) {\sf E}\left( X^{2} I\left(\left|X\right|{\leqslant} {\left(n+1\right)^{\alpha}}\right)\right) \\ =& C\sum\limits_{n=1}^\infty{n^{\alpha r-2\alpha-1}}h(n) {\sf E}\left( X^{2} I\left(n^{\alpha}<\left|X\right|{\leqslant} {\left(n+1\right)^{\alpha}}\right)\right) \\ &+ C\sum\limits_{n=1}^\infty{n^{\alpha r-2\alpha-1}}h(n) {\sf E}\left( X^{2} I\left(\left|X\right|{\leqslant} n^{\alpha}\right)\right) \\ {\leqslant}& C\sum\limits_{n=1}^\infty{n^{\alpha r-2\alpha}}h(n) {\sf E}\left( X^{2} I\left(n^{\alpha}<\left|X\right|{\leqslant} {\left(n+1\right)^{\alpha}}\right)\right)+C \\ {\leqslant}& C {\sf E}\left( \left|X\right|^{r} h(\left|X\right|^{1/\alpha})\right)+C<\infty. \end{align} (35)
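The x^{-2} analogue of the block estimate, used in the eighth step above, follows from the mean value theorem applied to t\mapsto t^{-\alpha} on [k,k+1] :

```latex
\int_{k^{\alpha}}^{(k+1)^{\alpha}} x^{-2}\,\mathrm{d} x
  = k^{-\alpha}-(k+1)^{-\alpha}
  = \alpha\,\xi_{k}^{-\alpha-1}
  \leqslant \alpha\, k^{-\alpha-1},
\qquad \xi_{k}\in(k,k+1),
```

together with the monotone bound {\sf E}\bigl(X^{2} I(|X|{\leqslant} x)\bigr)\leqslant {\sf E}\bigl(X^{2} I(|X|{\leqslant}(k+1)^{\alpha})\bigr) for x{\leqslant}(k+1)^{\alpha} .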
This completes the proof of Theorem 6.
[1] AN J, YUAN D M. Complete convergence of weighted sums for ρ*-mixing sequence of random variables[J]. Stat Probabil Lett, 2008, 78(12): 1466–1472. doi: 10.1016/j.spl.2007.12.020
[2] SHEN A T, XUE M X, WANG W J. Complete convergence for weighted sums of extended negatively dependent random variables[J]. Commun Stat-Theor M, 2017, 46(3): 1433–1444. doi: 10.1080/03610926.2015.1019147
[3] HSU P L, ROBBINS H. Complete convergence and the law of large numbers[J]. Proc Natl Acad Sci USA, 1947, 33(2): 25–31. doi: 10.1073/pnas.33.2.25
[4] ERDÖS P. On a theorem of Hsu and Robbins[J]. Ann Math Stat, 1949, 20(2): 286–291. doi: 10.1214/aoms/1177730037
[5] BAUM L E, KATZ M. Convergence rates in the law of large numbers[J]. Trans Amer Math Soc, 1965, 120(1): 108–123. doi: 10.1090/S0002-9947-1965-0198524-1
[6] ZHANG L X, WANG X Y. Convergence rates in the strong laws of asymptotically negatively associated random fields[J]. Appl Math, 1999, 14(4): 406–416. doi: 10.1007/s11766-999-0070-6
[7] ZHANG L X. A functional central limit theorem for asymptotically negatively dependent random fields[J]. Acta Math Hung, 2000, 86(3): 237–259. doi: 10.1023/A:1006720512467
[8] ZHANG L X. Central limit theorems for asymptotically negatively associated random fields[J]. Acta Math Sin, 2000, 16(4): 691–710. doi: 10.1007/s101140000084
[9] WANG J F, LU F B. Inequalities of maximum of partial sums and weak convergence for a class of weak dependent random variables[J]. Acta Math Sin, 2006, 22(3): 693–700. doi: 10.1007/s10114-005-0601-x
[10] WANG J F, ZHANG L X. A Berry-Esseen theorem and a law of the iterated logarithm for asymptotically negatively associated sequences[J]. Acta Math Sin, 2007, 23(1): 127–136. doi: 10.1007/s10114-005-0800-5
[11] BUDSABA K, CHEN P Y, VOLODIN A. Limiting behavior of moving average processes based on a sequence of ρ-mixing random variables[J]. Thail Statist, 2007, 5: 69–80.
[12] LIU X D, LIU J X. Moments of the maximum of normed partial sums of ρ-mixing random variables[J]. Appl Math, 2009, 24(3): 355–360. doi: 10.1007/s11766-009-1971-0
[13] YUAN D M, WU X S. Limiting behavior of the maximum of the partial sum for asymptotically negatively associated random variables under residual Cesàro alpha-integrability assumption[J]. J Stat Plan Infer, 2010, 140(9): 2395–2402. doi: 10.1016/j.jspi.2010.02.011
[14] TAN X L, ZHANG Y, ZHANG Y. An almost sure central limit theorem of products of partial sums for ρ-mixing sequences[J]. J Inequal Appl, 2012, 2012: 1–13. doi: 10.1186/1029-242X-2012-1
[15] KO M H. The Hájek-Rényi inequality and strong law of large numbers for ANA random variables[J]. J Inequal Appl, 2014, 2014: 1–9. doi: 10.1186/1029-242X-2014-1
[16] ZHANG Y. Complete moment convergence for moving average process generated by ρ⁻-mixing random variables[J]. J Inequal Appl, 2015, 2015: 1–13. doi: 10.1186/1029-242X-2015-1
[17] HUANG H W, PENG J Y, WU X T, et al. Complete convergence and complete moment convergence for arrays of rowwise ANA random variables[J]. J Inequal Appl, 2016, 2016: 1–13. doi: 10.1186/s13660-015-0952-5
[18] CHOW Y S. On the rate of moment convergence of sample sums and extremes[J]. Bull Inst Math Acad Sin, 1988, 16(3): 177–201.
[19] ADLER A, ROSALSKY A. Some general strong laws for weighted sums of stochastically dominated random variables[J]. Stoch Anal Appl, 1987, 5(1): 1–16. doi: 10.1080/07362998708809104
[20] ADLER A, ROSALSKY A, TAYLOR R L. Strong laws of large numbers for weighted sums of random elements in normed linear spaces[J]. Int J Math Math Sci, 1989, 12(3): 507–529. doi: 10.1155/S0161171289000657
[21] BAI Z D, SU C. The complete convergence for partial sums of i.i.d. random variables[J]. Sci China Ser A, 1985, 28: 1261–1277.