http://dx.doi.org/10.4236/am.2014.510153

How to cite this paper: Wang, Y.F. and Wu, Q.Y. (2014) A Note on the Almost Sure Central Limit Theorem in the Joint Version for the Maxima and Partial Sums of Certain Stationary Gaussian Sequences. Applied Mathematics, 5, 1598-1608. http://dx.doi.org/10.4236/am.2014.510153

A Note on the Almost Sure Central Limit Theorem in the Joint Version for the Maxima and Partial Sums of Certain Stationary Gaussian Sequences*

Yuanfang Wang, Qunying Wu

College of Science, Guilin University of Technology, Guilin, China
Email: wangyuanfang89@126.com, wqy666@glut.edu.cn

Received 29 April 2014; revised 29 May 2014; accepted 5 June 2014

Copyright © 2014 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Abstract

Considering a sequence of standardized stationary Gaussian random variables, a universal result in the almost sure central limit theorem for maxima and partial sums is established. Our result generalizes and improves the almost sure central limit theorem previously obtained by Marcin Dudzinski [1], and it is of optimal form.

Keywords

Almost Sure Central Limit Theorem, Stationary Gaussian Sequence, Slowly Varying Functions at Infinity

1. Introduction

In this paper, let $(X_n)_{n\in\mathbb{N}}$ be a standardized stationary Gaussian sequence, and set

$$M_n := \max_{1\le i\le n} X_i, \qquad M_{m,n} := \max_{m+1\le i\le n} X_i, \qquad S_n := \sum_{i=1}^{n} X_i, \qquad \sigma_n^2 := \operatorname{Var}(S_n), \qquad D_n := \sum_{k=1}^{n} d_k,$$

where $\{d_k\}$ is some sequence of weights. Let $I(\cdot)$ and $\Phi(\cdot)$ denote the indicator function and the standard normal distribution function, respectively.

*Supported by the National Natural Science Foundation of China (11361019) and Innovation Project of Guangxi Graduate Education

The ASCLT was first introduced independently by Brosamler [2] and Schatte [3] for partial sums; since then the concept has found applications in many fields. For example, Fahrner and Stadtmuller [4] and Cheng et al. [5] extended this almost sure central limit theorem for partial sums to the case of maxima of i.i.d. random variables. Under some conditions, they proved the following:

$$\lim_{n\to\infty} \frac{1}{\ln n} \sum_{k=1}^{n} \frac{1}{k}\, I\bigl(a_k(M_k - b_k) \le x\bigr) = G(x) \quad \text{a.s.},$$

for all $x \in \mathbb{R}$, where $a_k > 0$ and $b_k \in \mathbb{R}$ satisfy $a_k(M_k - b_k) \xrightarrow{d} G$ for some non-degenerate distribution function $G(\cdot)$. Afterwards, Marcin Dudzinski [1] showed the ASCLT in its two-dimensional version, i.e.

$$\lim_{n\to\infty} \frac{1}{D_n} \sum_{k=1}^{n} d_k\, I\Bigl(a_k(M_k - b_k) \le x,\ \frac{S_k}{\sigma_k} \le y\Bigr) = e^{-\tau}\,\Phi(y) \quad \text{a.s. for all } x, y \in \mathbb{R}, \tag{1.1}$$

with the weights $d_k = 1/k$. In this paper, we extend the weight $d_k = 1/k$ to the weight

$$d_k = \frac{\exp\bigl((\ln k)^{\beta}\bigr)}{k}, \qquad 0 \le \beta < \frac12.$$

Our purpose is to prove that if $(X_n)_{n\in\mathbb{N}}$ is a standardized stationary Gaussian sequence whose covariance function $r(t)$ fulfills

$$r(t) = \frac{L(t)}{t^{\alpha}}, \qquad \alpha > 0,\quad t = 1, 2, \ldots, \tag{1.2}$$

where $L(t)$ is a positive function which is slowly varying at infinity, and if moreover the numerical sequence $\{u_n\}$ satisfies

$$\lim_{n\to\infty} n\bigl(1 - \Phi(u_n)\bigr) = \tau \quad \text{for some } 0 \le \tau < \infty, \tag{1.3}$$

then we have

(

)

( )

1

1

lim , e

n

k k k k k

n k

n k

S

d I a M b x y y

D

τ σ

→∞ =

 

− ≤ ≤ = Φ

 

 

a.s. for any x y, ∈R, (1.4)

where

1

n n k

k

D d

=

=

and

(

( )

)

exp ln

k

k d

k β

= , 0 1

2

β

≤ < .

In the following, we write $a_n \sim b_n$ if $a_n/b_n \to 1$ as $n \to \infty$, and $a_n \ll b_n$ if there exists a constant $c > 0$ such that $a_n \le c\, b_n$ for sufficiently large $n$. Here $c$ stands for a constant which may vary from one line to another.

2. Main Results

Theorem 2.1. Let $(X_n)_{n\in\mathbb{N}}$ be a standardized stationary Gaussian sequence whose covariance function $r(t)$ satisfies (1.2) for some $\alpha > 0$. If the numerical sequence $\{u_n\}$ satisfies (1.3), then (1.4) holds with $d_k = \exp\bigl((\ln k)^{\beta}\bigr)/k$, $0 \le \beta < \tfrac12$.
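As a rough numerical illustration of Theorem 2.1, the sketch below simulates one path of a standardized stationary Gaussian sequence with power-law covariance decay (we use the covariance of fractional Gaussian noise, for which $r(t) \sim c\,t^{-\alpha}$ with $\alpha = 2 - 2H$, so that the covariance matrix is guaranteed positive definite), takes $u_k = \Phi^{-1}(1 - \tau/k)$ so that (1.3) holds exactly, and compares the weighted empirical average in (1.4) with its limit $e^{-\tau}\Phi(y)$. All concrete parameter values ($H$, $\beta$, $\tau$, $y$, $n$) are illustrative assumptions, and at feasible sample sizes the agreement is only rough, since the convergence rate is logarithmic.

```python
import numpy as np
from statistics import NormalDist

# Monte Carlo sketch of (1.4) using fractional Gaussian noise, whose
# covariance satisfies r(t) ~ c * t^(-alpha) with alpha = 2 - 2H.
# H, beta, tau, y, n are illustrative choices, not values from the paper.
nd = NormalDist()
H, beta, tau, y, n = 0.55, 0.3, 0.5, 0.5, 1000

t = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
cov = 0.5 * ((t + 1.0) ** (2 * H) - 2.0 * t ** (2 * H) + np.abs(t - 1.0) ** (2 * H))

rng = np.random.default_rng(0)
X = np.linalg.cholesky(cov) @ rng.standard_normal(n)  # one sample path

S = np.cumsum(X)                  # partial sums S_k
M = np.maximum.accumulate(X)      # partial maxima M_k
# sigma_k^2 = Var(S_k) = sum of the leading k-by-k block of cov.
sigma = np.sqrt(cov.cumsum(axis=0).cumsum(axis=1).diagonal())

k = np.arange(1, n + 1)
u = np.array([nd.inv_cdf(1.0 - tau / kk) for kk in k])  # k(1 - Phi(u_k)) = tau
d = np.exp(np.log(k) ** beta) / k                       # weights of Theorem 2.1
D = d.sum()

avg = np.sum(d * ((M <= u) & (S / sigma <= y))) / D
target = np.exp(-tau) * nd.cdf(y)
print(avg, target)
```

A single path of length $10^3$ only hints at the almost sure limit; averaging the weighted indicator over longer paths moves `avg` slowly toward `target`.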

Let $D = \{D_n\}_{n\in\mathbb{N}}$ be a positive non-decreasing sequence with $\lim_{n\to\infty} D_n = \infty$. We say that $\{x_k\}_{k\in\mathbb{N}}$ is $D$-summable to a finite limit $x$ if

$$\lim_{n\to\infty} \frac{1}{D_n} \sum_{k=1}^{n} d_k x_k = x. \tag{2.1}$$


Remark 2.1. By a classical theorem of Hardy (see Chandrasekharan and Minakshisundaram [6]), if two sequences $D$ and $D^*$ satisfy $D_n^* = O(D_n)$, then $D$-summability implies $D^*$-summability; i.e., if a sequence $\{x_n\}_{n\in\mathbb{N}}$ is $D$-summable to $x$, then it is also $D^*$-summable to $x$. Consequently, if (2.1) holds with the weights $\{d_k\}_{k\in\mathbb{N}}$, then for $0 \le d_k^* \le d_k$ with $D_n^* \to \infty$, (2.1) also holds with the weights $\{d_k^*\}_{k\in\mathbb{N}}$; in particular, if the ASCLT holds with $\{d_k\}$, then it also holds with $\{d_k^*\}$. Thus, by using larger weights we should expect to obtain stronger results. Theorem 2.1 therefore remains valid if we extend the weights from $d_k = 1/k$ to $d_k = \exp\bigl((\ln k)^{\beta}\bigr)/k$, $0 < \beta < \tfrac12$; when $\beta = 0$, this reduces to the weight $d_k = 1/k$ (up to the constant factor $e$).

Lemma 2.2. Under the same assumptions as in Theorem 2.1, if the numerical sequence $\{u_n\}$ fulfills (1.3), then there exists some $\gamma > 0$ such that for any $x, y \in \mathbb{R}$ and $m < n$,

$$E\left|I\Bigl(M_{m,n} \le u_n,\ \frac{S_n}{\sigma_n} \le y\Bigr) - I\Bigl(M_n \le u_n,\ \frac{S_n}{\sigma_n} \le y\Bigr)\right| \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma}, \tag{2.2}$$

$$\left|\operatorname{Cov}\Bigl(I\Bigl(M_m \le u_m,\ \frac{S_m}{\sigma_m} \le y\Bigr),\ I\Bigl(M_{m,n} \le u_n,\ \frac{S_n}{\sigma_n} \le y\Bigr)\Bigr)\right| \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma}, \tag{2.3}$$

where $u_m = \dfrac{x}{a_m} + b_m$ and $u_n = \dfrac{x}{a_n} + b_n$.

3. Proofs

3.1. Proof of Theorem 2.1

Under the assumptions of Theorem 2.1 on $X_1, X_2, \ldots$, $r(t)$ and $\{u_n\}$, Theorem 4.3.3 in Leadbetter et al. [7] gives $\lim_{n\to\infty} P(M_n \le u_n) = e^{-\tau}$ for the $\tau$ defined in (1.3). Let $y$ be a real number and, for each $n \ge 1$, let $Y_n$ denote a standard normal variable which is independent of $(X_1, \ldots, X_n)$ and has the same distribution as $S_n/\sigma_n$. From the proof of Lemma 2.2, we get that

$$\left|P\Bigl(M_n \le u_n,\ \frac{S_n}{\sigma_n} \le y\Bigr) - P(M_n \le u_n)\,P(Y_n \le y)\right| \ll \frac{1}{n^{\gamma}} \to 0, \quad \text{as } n \to \infty.$$

Thus we have

$$\lim_{n\to\infty} P\Bigl(M_n \le u_n,\ \frac{S_n}{\sigma_n} \le y\Bigr) = \lim_{n\to\infty} P(M_n \le u_n)\,P(Y_n \le y) = e^{-\tau}\Phi(y),$$

and by the Toeplitz lemma we obtain

$$\lim_{n\to\infty} \frac{1}{D_n} \sum_{k=1}^{n} d_k\, P\Bigl(M_k \le u_k,\ \frac{S_k}{\sigma_k} \le y\Bigr) = e^{-\tau}\Phi(y), \tag{3.1}$$

where $D_n = \sum_{k=1}^{n} d_k$ and $d_k = \exp\bigl((\ln k)^{\beta}\bigr)/k$, $0 < \beta < \tfrac12$. To prove Theorem 2.1 for $0 < \beta < \tfrac12$, it remains to show that

$$\frac{1}{D_n} \sum_{k=1}^{n} d_k \left[ I\Bigl(M_k \le u_k,\ \frac{S_k}{\sigma_k} \le y\Bigr) - P\Bigl(M_k \le u_k,\ \frac{S_k}{\sigma_k} \le y\Bigr) \right] \to 0 \quad \text{a.s.} \tag{3.2}$$

Following the argument of Csaki and Gonchigdanzan [8], (3.2) follows once we show that, for some $\varepsilon > 0$,

$$\operatorname{Var}\left(\sum_{k=1}^{n} d_k\, I\Bigl(M_k \le u_k,\ \frac{S_k}{\sigma_k} \le y\Bigr)\right) \ll \frac{D_n^2}{(\ln D_n)^{1+\varepsilon}}. \tag{3.3}$$

Set $\xi_k := I\Bigl(M_k \le u_k,\ \dfrac{S_k}{\sigma_k} \le y\Bigr) - P\Bigl(M_k \le u_k,\ \dfrac{S_k}{\sigma_k} \le y\Bigr)$. By Lemma 2.2, we have

$$\begin{aligned}
|E \xi_k \xi_l| &= \left|\operatorname{Cov}\Bigl(I\Bigl(M_k \le u_k, \tfrac{S_k}{\sigma_k} \le y\Bigr),\ I\Bigl(M_l \le u_l, \tfrac{S_l}{\sigma_l} \le y\Bigr)\Bigr)\right| \\
&\le \left|\operatorname{Cov}\Bigl(I\Bigl(M_k \le u_k, \tfrac{S_k}{\sigma_k} \le y\Bigr),\ I\Bigl(M_l \le u_l, \tfrac{S_l}{\sigma_l} \le y\Bigr) - I\Bigl(M_{k,l} \le u_l, \tfrac{S_l}{\sigma_l} \le y\Bigr)\Bigr)\right| \\
&\quad + \left|\operatorname{Cov}\Bigl(I\Bigl(M_k \le u_k, \tfrac{S_k}{\sigma_k} \le y\Bigr),\ I\Bigl(M_{k,l} \le u_l, \tfrac{S_l}{\sigma_l} \le y\Bigr)\Bigr)\right| \\
&\le E\left|I\Bigl(M_l \le u_l, \tfrac{S_l}{\sigma_l} \le y\Bigr) - I\Bigl(M_{k,l} \le u_l, \tfrac{S_l}{\sigma_l} \le y\Bigr)\right| + \left|\operatorname{Cov}\Bigl(I\Bigl(M_k \le u_k, \tfrac{S_k}{\sigma_k} \le y\Bigr),\ I\Bigl(M_{k,l} \le u_l, \tfrac{S_l}{\sigma_l} \le y\Bigr)\Bigr)\right| \\
&\ll \Bigl(\frac{k}{l}\Bigr)^{\gamma}, \qquad \text{for } 1 \le k < l.
\end{aligned}$$

We know that

$$\begin{aligned}
\operatorname{Var}\left(\sum_{k=1}^{n} d_k\, I\Bigl(M_k \le u_k, \tfrac{S_k}{\sigma_k} \le y\Bigr)\right) &= E\left(\sum_{k=1}^{n} d_k \xi_k\right)^2 \\
&\le \sum_{k=1}^{n} d_k^2\, E\xi_k^2 + 2 \sum_{\substack{1\le k<l\le n \\ l/k \le (\ln D_n)^{2/\gamma}}} d_k d_l\, |E\xi_k\xi_l| + 2 \sum_{\substack{1\le k<l\le n \\ l/k > (\ln D_n)^{2/\gamma}}} d_k d_l \Bigl(\frac{k}{l}\Bigr)^{\gamma} =: T_{n1} + T_{n2},
\end{aligned}$$

where $T_{n1}$ collects the first two sums.

For $T_{n1}$, we have the following estimate:

$$T_{n1} \ll \sum_{k=1}^{n} d_k^2 + \sum_{\substack{1 \le k < l \le n \\ l/k \le (\ln D_n)^{2/\gamma}}} d_k d_l.$$

By elementary calculation, it is easy to see that

$$D_n \sim \frac{1}{\beta}(\ln n)^{1-\beta}\exp\bigl((\ln n)^{\beta}\bigr), \qquad \ln D_n \sim (\ln n)^{\beta}, \qquad \ln\ln D_n \sim \ln\ln n.$$

For $0 < \beta < \tfrac12$, set $\varepsilon := \dfrac{1-2\beta}{2\beta} > 0$, so that $\dfrac{1}{2\beta} = 1 + \varepsilon$. Since $\sum_{k=1}^{\infty} d_k^2 < \infty$, $d_l \le \exp\bigl((\ln n)^{\beta}\bigr)/l$ for $l \le n$, and for fixed $k$ the inner sum runs over $k < l \le k(\ln D_n)^{2/\gamma}$, we get

$$\begin{aligned}
T_{n1} &\ll 1 + \sum_{k=1}^{n} d_k\, \exp\bigl((\ln n)^{\beta}\bigr) \sum_{k < l \le k(\ln D_n)^{2/\gamma}} \frac{1}{l} \ll D_n \exp\bigl((\ln n)^{\beta}\bigr) \ln\ln D_n \\
&\ll (\ln n)^{1-\beta}\exp\bigl(2(\ln n)^{\beta}\bigr)\ln\ln n \ll \frac{(\ln n)^{2-2\beta}\exp\bigl(2(\ln n)^{\beta}\bigr)}{(\ln n)^{1/2}} \ll \frac{D_n^2}{(\ln D_n)^{1+\varepsilon}},
\end{aligned}$$

since $(\ln D_n)^{1+\varepsilon} \sim (\ln n)^{\beta(1+\varepsilon)} = (\ln n)^{1/2}$ and $\ln\ln n \ll (\ln n)^{1/2-\beta}$. For $T_{n2}$, the constraint $l/k > (\ln D_n)^{2/\gamma}$ gives $(k/l)^{\gamma} < (\ln D_n)^{-2}$, so that $T_{n2} \ll D_n^2(\ln D_n)^{-2}$. Combining the two bounds yields (3.3) (with $\varepsilon$ replaced by $\min(\varepsilon, 1)$ if necessary), hence (3.2), and together with (3.1) this completes the proof of Theorem 2.1.
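The asymptotic formula $D_n \sim \tfrac{1}{\beta}(\ln n)^{1-\beta}\exp((\ln n)^{\beta})$ for the weights $d_k = \exp((\ln k)^{\beta})/k$ can be probed numerically. Because the relative error decays only logarithmically, the sketch below (with the illustrative choice $\beta = 0.4$) merely checks that the ratio of $D_n$ to its asymptotic equivalent moves toward $1$ as $n$ grows:

```python
import numpy as np

# Compare D_n = sum_{k<=n} exp((ln k)^beta)/k with its asymptotic form
# (1/beta) (ln n)^(1-beta) exp((ln n)^beta).  beta is an illustrative choice.
beta = 0.4

def ratio(n):
    k = np.arange(1, n + 1, dtype=float)
    d_n = np.sum(np.exp(np.log(k) ** beta) / k)
    ln = np.log(n)
    asym = (ln ** (1.0 - beta)) * np.exp(ln ** beta) / beta
    return d_n / asym

r_small, r_large = ratio(10**3), ratio(10**6)
print(r_small, r_large)   # slowly increasing toward 1
```

At attainable $n$ the ratio is still well below $1$, which is a useful reminder that all the rates in this section are genuinely logarithmic.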


3.2. Proof of Lemma 2.2

We first consider the case where (1.2) holds with some $0 < \alpha < 1$. For $1 \le i \le n$, we have

$$0 < \operatorname{Cov}\Bigl(X_i, \frac{S_n}{\sigma_n}\Bigr) = \frac{1}{\sigma_n}\sum_{j=1}^{n}\operatorname{Cov}(X_i, X_j) = \frac{1}{\sigma_n}\left(1 + \sum_{j=1}^{i-1} r(i-j) + \sum_{j=i+1}^{n} r(j-i)\right) \le \frac{1}{\sigma_n}\left(1 + 2\sum_{t=1}^{n}\frac{L(t)}{t^{\alpha}}\right), \tag{4.1}$$

for some $0 < \alpha < 1$. By an application of Karamata's theorem (see Mielniczuk [9]), we obtain

$$\sigma_n^2 \sim \frac{2}{(1-\alpha)(2-\alpha)}\, L(n)\, n^{2-\alpha}. \tag{4.2}$$

From (4.1) and (4.2), for some $0 < \alpha < 1$, we have

$$\operatorname{Cov}\Bigl(X_i, \frac{S_n}{\sigma_n}\Bigr) \ll \frac{1 + \sum_{t=1}^{n} L(t)t^{-\alpha}}{L^{1/2}(n)\, n^{1-\alpha/2}} \ll \frac{L(n)\, n^{1-\alpha}}{L^{1/2}(n)\, n^{1-\alpha/2}} = \frac{L^{1/2}(n)}{n^{\alpha/2}}.$$

Since $L(n)$ is a slowly varying function at infinity, $L(n) \ll n^{\varepsilon}$ for any $\varepsilon > 0$, so

$$\operatorname{Cov}\Bigl(X_i, \frac{S_n}{\sigma_n}\Bigr) \ll \frac{n^{\varepsilon}}{n^{\alpha/2}}$$

for any $\varepsilon > 0$ and some $0 < \alpha < 1$. Hence

$$0 < \sup_{1\le i\le n}\operatorname{Cov}\Bigl(X_i, \frac{S_n}{\sigma_n}\Bigr) \ll \frac{1}{n^{\beta}} \to 0, \quad \text{as } n \to \infty, \text{ for some } 0 < \beta < \frac12, \tag{4.3}$$

so there exist $\lambda$ and $n_0$ such that

$$0 < \sup_{1\le i\le n}\operatorname{Cov}\Bigl(X_i, \frac{S_n}{\sigma_n}\Bigr) < \lambda < 1, \quad \text{for any } n > n_0. \tag{4.4}$$

By (1.2), $\lim_{t\to\infty} r(t) = \lim_{t\to\infty} L(t)/t^{\alpha} = 0$; thus there exist $\delta$ and $n_1$ such that

$$0 < \sup_{t > n_1} r(t) = \delta < 1. \tag{4.5}$$

Let $y$ be an arbitrary real number and $m < n$. Suppose that $Y_n$ is a random variable which has the same distribution as $S_n/\sigma_n$ but is independent of $(X_1, \ldots, X_n)$. Then we have

$$\begin{aligned}
E\left|I\Bigl(M_{m,n} \le u_n, \tfrac{S_n}{\sigma_n} \le y\Bigr) - I\Bigl(M_n \le u_n, \tfrac{S_n}{\sigma_n} \le y\Bigr)\right| &= P\Bigl(M_{m,n} \le u_n, \tfrac{S_n}{\sigma_n} \le y\Bigr) - P\Bigl(M_n \le u_n, \tfrac{S_n}{\sigma_n} \le y\Bigr) \\
&\le \left|P\Bigl(M_{m,n} \le u_n, \tfrac{S_n}{\sigma_n} \le y\Bigr) - P(M_{m,n} \le u_n)\,P(Y_n \le y)\right| \\
&\quad + \left|P\Bigl(M_n \le u_n, \tfrac{S_n}{\sigma_n} \le y\Bigr) - P(M_n \le u_n)\,P(Y_n \le y)\right| \\
&\quad + \bigl(P(M_{m,n} \le u_n) - P(M_n \le u_n)\bigr) \\
&=: A_1 + A_2 + A_3.
\end{aligned}$$

By Theorem 4.2.1 in Leadbetter et al. (1983) and (4.3), (4.4) and (4.5), we get that

$$A_1 + A_2 \ll \sum_{i=1}^{n}\operatorname{Cov}\Bigl(X_i, \frac{S_n}{\sigma_n}\Bigr)\exp\left(-\frac{u_n^2}{2(1+\lambda)}\right) \ll n\cdot\frac{1}{n^{\beta}}\exp\left(-\frac{u_n^2}{2(1+\lambda)}\right). \tag{4.6}$$

As we know, $\{u_n\}$ fulfills (1.3), which implies that

$$\exp\left(-\frac{u_n^2}{2}\right) \sim \frac{c\,u_n}{n}, \qquad u_n \sim (2\ln n)^{1/2}, \qquad \exp\left(-\frac{u_n^2}{2(1+\lambda)}\right) \ll \frac{(\ln n)^{\frac{1}{2(1+\lambda)}}}{n^{\frac{1}{1+\lambda}}}.$$

Combining these asymptotics with (4.6), we have

$$A_1 + A_2 \ll n^{1-\beta}\,\frac{(\ln n)^{\frac{1}{2(1+\lambda)}}}{n^{\frac{1}{1+\lambda}}} = (\ln n)^{\frac{1}{2(1+\lambda)}}\, n^{-\left(\frac{1}{1+\lambda}+\beta-1\right)}.$$

We set $\dfrac{1}{1+\lambda}+\beta-1 > 0$; thus

$$A_1 + A_2 \ll \frac{1}{n^{\gamma}} \le \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } 0 < \gamma < \frac{1}{1+\lambda}+\beta-1. \tag{4.7}$$

Next, we estimate $A_3$:

$$\begin{aligned}
A_3 &= P(M_{m,n} \le u_n) - P(M_n \le u_n) \\
&\le \left|P(M_{m,n} \le u_n) - \Phi^{\,n-m}(u_n)\right| + \left|P(M_n \le u_n) - \Phi^{\,n}(u_n)\right| + \bigl(\Phi^{\,n-m}(u_n) - \Phi^{\,n}(u_n)\bigr) \\
&=: B_1 + B_2 + B_3.
\end{aligned} \tag{4.8}$$

Let $\delta$ be as in (4.5) and assume $\delta < \dfrac{\alpha}{2-\alpha}$, so that $\dfrac{2}{1+\delta} + \alpha - 2 > 0$. By Theorem 4.2.1 in Leadbetter et al. and (1.3), we have

$$B_1 + B_2 \ll n\sum_{t=1}^{n} r(t)\,\exp\left(-\frac{u_n^2}{1+\delta}\right) \ll n\cdot n^{1-\alpha}L(n)\cdot\frac{(\ln n)^{\frac{1}{1+\delta}}}{n^{\frac{2}{1+\delta}}} = n^{2-\alpha-\frac{2}{1+\delta}}\,L(n)\,(\ln n)^{\frac{1}{1+\delta}};$$

since $L(n)$ and $\ln n$ are slowly varying functions at infinity, $(\ln n)L(n) \ll n^{\varepsilon}$ for any $\varepsilon > 0$, and therefore

$$B_1 + B_2 \ll \frac{1}{n^{\gamma}} \le \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } \gamma > 0. \tag{4.9}$$

Meanwhile, from the elementary inequality $x^{n-m} - x^{n} \le \dfrac{m}{n}$ for $0 \le x \le 1$, we get

$$B_3 \ll \frac{m}{n} \le \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for } m < n. \tag{4.10}$$

By (4.8), (4.9) and (4.10), if (1.2) holds with $0 < \alpha < 1$, then (2.2) holds.
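The elementary inequality used for (4.10) can be confirmed by maximizing over a grid; calculus gives the maximizer $x = ((n-m)/n)^{1/m}$, at which the value is $((n-m)/n)^{(n-m)/m}\cdot m/n \le m/n$.

```python
import numpy as np

# Grid check of x^(n-m) - x^n <= m/n on [0, 1] for several pairs (m, n).
x = np.linspace(0.0, 1.0, 100001)
for m, n in [(1, 2), (3, 10), (10, 1000), (250, 1000)]:
    gap = x ** (n - m) - x ** n
    assert gap.max() <= m / n + 1e-12
print("inequality holds on the grid")
```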

Provided (1.2) holds with some $\alpha \ge 1$: since $r(\cdot)$ is positive, we get $\sigma_n = \operatorname{Var}^{1/2}(S_n) \gg n^{1/2}$, so that

$$0 < \operatorname{Cov}\Bigl(X_i, \frac{S_n}{\sigma_n}\Bigr) \ll \frac{1}{n^{1/2}}\sum_{t=1}^{n}\frac{L(t)}{t^{\alpha}}.$$

As $\sum_{t=1}^{n} L(t)t^{-\alpha}$ is slowly varying at infinity, $\sum_{t=1}^{n} L(t)t^{-\alpha} \ll n^{\varepsilon}$ for any $\varepsilon > 0$, and we obtain (4.3) and (4.4) as before. Using Theorem 4.2.1 in Leadbetter et al. together with (4.6)-(4.10), with $A_1$-$A_3$ replaced by $C_1$-$C_3$, we again get $C_1 + C_2 \ll (m/n)^{\gamma}$; with $B_1$-$B_3$ replaced by $D_1$-$D_3$ in (4.8), we also obtain

$$D_1 + D_2 \ll n\sum_{t=1}^{n} r(t)\,\exp\left(-\frac{u_n^2}{1+\delta}\right) \ll n^{1+\varepsilon}\,\frac{(\ln n)^{\frac{1}{1+\delta}}}{n^{\frac{2}{1+\delta}}} \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } \gamma > 0,$$

and, as in (4.10), $D_3 \ll m/n \le (m/n)^{\gamma}$, so that $C_3 \ll (m/n)^{\gamma}$. Hence, when (1.2) holds with $\alpha \ge 1$, (2.2) also holds.

Next, we prove (2.3). First we consider the situation of some $0 < \alpha < 1$. Let $i \ge m+1$. Since $L(t)$ is a positive slowly varying function, $L^{1/2}(m) \ll m^{\varepsilon}$ for any $\varepsilon > 0$; moreover $L(t)/t^{\alpha}$ is monotonically decreasing, so

$$\sum_{t=i-m}^{i-1}\frac{L(t)}{t^{\alpha}} \le \sum_{t=1}^{m}\frac{L(t)}{t^{\alpha}}.$$

By (1.2) and (4.2), we obtain

$$0 < \operatorname{Cov}\Bigl(X_i, \frac{S_m}{\sigma_m}\Bigr) = \frac{1}{\sigma_m}\sum_{t=i-m}^{i-1}\frac{L(t)}{t^{\alpha}} \le \frac{1}{\sigma_m}\sum_{t=1}^{m}\frac{L(t)}{t^{\alpha}} \ll \frac{L(m)\,m^{1-\alpha}}{L^{1/2}(m)\,m^{1-\alpha/2}} = \frac{L^{1/2}(m)}{m^{\alpha/2}} \ll \frac{1}{m^{\eta}} \quad \text{for some } 0 < \eta < \frac{\alpha}{2};$$

hence there exist numbers $\mu$ and $m_0$ such that

$$0 < \sup_{i \ge m+1}\operatorname{Cov}\Bigl(X_i, \frac{S_m}{\sigma_m}\Bigr) < \mu < 1, \quad \text{for all } m > m_0.$$

By the Normal Comparison Lemma, we obtain

$$\begin{aligned}
&\left|\operatorname{Cov}\Bigl(I\Bigl(M_m \le u_m, \tfrac{S_m}{\sigma_m} \le y\Bigr),\ I\Bigl(M_{m,n} \le u_n, \tfrac{S_n}{\sigma_n} \le y\Bigr)\Bigr)\right| \\
&\quad = \left|P\Bigl(M_m \le u_m, \tfrac{S_m}{\sigma_m} \le y,\ M_{m,n} \le u_n, \tfrac{S_n}{\sigma_n} \le y\Bigr) - P\Bigl(M_m \le u_m, \tfrac{S_m}{\sigma_m} \le y\Bigr)\,P\Bigl(M_{m,n} \le u_n, \tfrac{S_n}{\sigma_n} \le y\Bigr)\right| \\
&\quad \ll \sum_{i=1}^{m}\sum_{j=m+1}^{n} r(j-i)\exp\left(-\frac{u_m^2+u_n^2}{2\bigl(1+r(j-i)\bigr)}\right) + \sum_{i=m+1}^{n}\operatorname{Cov}\Bigl(X_i, \frac{S_m}{\sigma_m}\Bigr)\exp\left(-\frac{u_n^2}{2\bigl(1+\operatorname{Cov}\bigl(X_i, \frac{S_m}{\sigma_m}\bigr)\bigr)}\right) \\
&\qquad + \sum_{i=1}^{m}\operatorname{Cov}\Bigl(X_i, \frac{S_n}{\sigma_n}\Bigr)\exp\left(-\frac{u_m^2}{2\bigl(1+\operatorname{Cov}\bigl(X_i, \frac{S_n}{\sigma_n}\bigr)\bigr)}\right) + \left|\operatorname{Cov}\Bigl(\frac{S_m}{\sigma_m}, \frac{S_n}{\sigma_n}\Bigr)\right| \\
&\quad =: E_1 + E_2 + E_3 + E_4.
\end{aligned} \tag{4.11}$$

By (4.5), choose $\delta < \dfrac{\alpha}{2-\alpha}$, so that $\dfrac{2}{1+\delta} + \alpha - 2 > 0$. We have

$$E_1 \ll \exp\left(-\frac{u_m^2+u_n^2}{2(1+\delta)}\right)\sum_{i=1}^{m}\sum_{j=m+1}^{n} r(j-i) \ll m\,n^{1-\alpha}L(n)\,\frac{(\ln m)^{\frac{1}{2(1+\delta)}}}{m^{\frac{1}{1+\delta}}}\,\frac{(\ln n)^{\frac{1}{2(1+\delta)}}}{n^{\frac{1}{1+\delta}}} \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } 0 < \gamma < 1.$$

For $E_2$, recall that $\operatorname{Cov}(X_i, S_m/\sigma_m) < \mu$ for $i \ge m+1$ and $m > m_0$; since this supremum tends to $0$ as $m \to \infty$, we may assume $\mu < \dfrac{\alpha}{2-\alpha}$, so that $1 - \dfrac{\alpha}{2} - \dfrac{1}{1+\mu} < 0$. Then

$$E_2 \ll \exp\left(-\frac{u_n^2}{2(1+\mu)}\right)\frac{1}{\sigma_m}\sum_{i=m+1}^{n}\sum_{t=i-m}^{i-1} r(t) \ll \frac{m}{\sigma_m}\sum_{t=1}^{n}\frac{L(t)}{t^{\alpha}}\cdot\frac{(\ln n)^{\frac{1}{2(1+\mu)}}}{n^{\frac{1}{1+\mu}}} \ll \Bigl(\frac{m}{n}\Bigr)^{\alpha/2} n^{1-\frac{\alpha}{2}-\frac{1}{1+\mu}+\varepsilon}(\ln n)^{\frac{1}{2(1+\mu)}} \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } 0 < \gamma \le \frac{\alpha}{2}.$$

For $E_3$, by (4.3) and (4.4), choose $\lambda < \dfrac{\beta}{1-\beta}$, so that $\dfrac{\lambda}{1+\lambda} < \beta$; then

$$E_3 \ll \exp\left(-\frac{u_m^2}{2(1+\lambda)}\right)\sum_{i=1}^{m}\operatorname{Cov}\Bigl(X_i, \frac{S_n}{\sigma_n}\Bigr) \ll \frac{m}{n^{\beta}}\cdot\frac{(\ln m)^{\frac{1}{2(1+\lambda)}}}{m^{\frac{1}{1+\lambda}}} = m^{\frac{\lambda}{1+\lambda}}(\ln m)^{\frac{1}{2(1+\lambda)}}\,n^{-\beta} \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } \frac{\lambda}{1+\lambda} < \gamma < \beta.$$

It remains to estimate $E_4 = \operatorname{Cov}\bigl(\tfrac{S_m}{\sigma_m}, \tfrac{S_n}{\sigma_n}\bigr)$. Write

$$E_4 = \frac{1}{\sigma_m\sigma_n}E\bigl[S_m(S_n - S_m)\bigr] + \frac{1}{\sigma_m\sigma_n}E\bigl[(S_{m+n} - S_n)^2\bigr] + \frac{1}{\sigma_m\sigma_n}\Bigl(E S_m^2 - E\bigl[(S_{m+n} - S_n)^2\bigr]\Bigr) =: F_1 + F_2 + F_3. \tag{4.12}$$

Since $L(t)/t^{\alpha}$ is monotonically decreasing, for $1 \le i \le m$ we have $r(j-i) \le r(j-m)$; by (4.2), Karamata's theorem and the properties of slowly varying functions, we obtain

$$F_1 = \frac{1}{\sigma_m\sigma_n}\sum_{i=1}^{m}\sum_{j=m+1}^{n}\operatorname{Cov}(X_i, X_j) \le \frac{m}{\sigma_m\sigma_n}\sum_{k=1}^{n} r(k) \ll \frac{m\,n^{1-\alpha}L(n)}{m^{1-\alpha/2}L^{1/2}(m)\,n^{1-\alpha/2}L^{1/2}(n)} = \Bigl(\frac{m}{n}\Bigr)^{\alpha/2}\left(\frac{L(n)}{L(m)}\right)^{1/2} \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } 0 < \gamma < \frac{\alpha}{2}.$$

By $\sigma_n^2 = \operatorname{Var}(S_n)$, we get $E(S_m/\sigma_m)^2 = 1$; by the stationarity of the sequence $(X_n)_{n\in\mathbb{N}}$, $S_{m+n} - S_n \stackrel{d}{=} S_m$, and thus $E\bigl[(S_{m+n} - S_n)^2\bigr] = E S_m^2 = \sigma_m^2$. Using this and (4.2), we deduce that

$$F_2 = \frac{1}{\sigma_m\sigma_n}E\bigl[(S_{m+n} - S_n)^2\bigr] = \frac{\sigma_m}{\sigma_n} \ll \left(\frac{m^{2-\alpha}L(m)}{n^{2-\alpha}L(n)}\right)^{1/2} \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } 0 < \gamma < 1.$$

For $F_3$, by the stationarity of the sequence we have

$$F_3 = \frac{1}{\sigma_m\sigma_n}\Bigl(E S_m^2 - E\bigl[(S_{m+n}-S_n)^2\bigr]\Bigr) = 0,$$

so we have proved (2.3) for some $0 < \alpha < 1$. Next, we prove (2.3) for some $\alpha > 1$:

$$\left|\operatorname{Cov}\Bigl(I\Bigl(M_m \le u_m,\ \tfrac{S_m}{\sigma_m} \le y\Bigr),\ I\Bigl(M_{m,n} \le u_n,\ \tfrac{S_n}{\sigma_n} \le y\Bigr)\Bigr)\right| \ll G_1 + G_2 + G_3 + G_4,$$

where $G_1$-$G_4$ are defined as $E_1$-$E_4$ in (4.11), but with (1.2) holding for $\alpha > 1$. Similarly to the proof for $E_1$-$E_4$, it is easy to check that $G_1 + G_2 + G_3 \ll (m/n)^{\gamma}$ for some $\gamma > 0$.

Define $G_4 =: H_1 + H_2 + H_3$ as in (4.12). By Karamata's theorem, we get

$$H_1 = \frac{1}{\sigma_m\sigma_n}\sum_{i=1}^{m}\sum_{j=m+1}^{n} r(j-i) \le \frac{m}{\sigma_m\sigma_n}\sum_{t=1}^{n} r(t) \ll \frac{m}{m^{1/2}\,n^{1/2}} = \Bigl(\frac{m}{n}\Bigr)^{1/2} \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } 0 < \gamma < \frac12.$$

By the definition of $\sigma_n$ and Karamata's theorem, it is easy to obtain that

$$\sigma_n^2 = n + 2\sum_{t=1}^{n-1}(n-t)\,r(t) \le n + 2n\sum_{t=1}^{\infty}\frac{L(t)}{t^{\alpha}} \le c\,n,$$

so that $\sigma_n = O(n^{1/2})$. Similarly to the proof of $F_2$,

$$H_2 = \frac{\sigma_m}{\sigma_n} \ll \Bigl(\frac{m}{n}\Bigr)^{1/2} \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for } 0 < \gamma < \frac12,$$

and by the stationarity of the sequence $(X_n)_{n\in\mathbb{N}}$ it is easy to see that

$$H_3 = \frac{1}{\sigma_m\sigma_n}\Bigl(E S_m^2 - E\bigl[(S_{m+n}-S_n)^2\bigr]\Bigr) = 0.$$

So we have proved (2.3) for the case $\alpha > 1$. Finally, we prove (2.3) for $\alpha = 1$:

$$\left|\operatorname{Cov}\Bigl(I\Bigl(M_m \le u_m,\ \tfrac{S_m}{\sigma_m} \le y\Bigr),\ I\Bigl(M_{m,n} \le u_n,\ \tfrac{S_n}{\sigma_n} \le y\Bigr)\Bigr)\right| \ll I_1 + I_2 + I_3 + I_4,$$

where $E_1$-$E_4$ in (4.11) are replaced by $I_1$-$I_4$, now with (1.2) holding for $\alpha = 1$. Similarly, it is easy to check that $I_1 + I_2 + I_3 \ll (m/n)^{\gamma}$ for some $\gamma > 0$. Define $I_4 =: J_1 + J_2 + J_3$ as in (4.12). Since $r(t)$ is monotonically decreasing, define

$$\hat{L}(n) := 1 + 2\sum_{t=1}^{n} r(t),$$

which is slowly varying at infinity, with $\sigma_n^2 \asymp n\hat{L}(n)$ (i.e., each side is $\ll$ the other). By an application of Potter's theorem for slowly varying functions, for any $0 < \delta_1 < \tfrac12$ we have

$$J_1 \ll \frac{m}{\sigma_m\sigma_n}\sum_{t=1}^{n} r(t) \ll \frac{m\,\hat{L}(n)}{\bigl(m\hat{L}(m)\bigr)^{1/2}\bigl(n\hat{L}(n)\bigr)^{1/2}} = \Bigl(\frac{m}{n}\Bigr)^{1/2}\left(\frac{\hat{L}(n)}{\hat{L}(m)}\right)^{1/2} \ll \Bigl(\frac{m}{n}\Bigr)^{\frac{1-\delta_1}{2}} \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } 0 < \gamma < \frac12.$$

Similarly to the proof of $F_2$,

$$J_2 = \frac{\sigma_m}{\sigma_n} \ll \left(\frac{m\hat{L}(m)}{n\hat{L}(n)}\right)^{1/2} \ll \Bigl(\frac{m}{n}\Bigr)^{1/2} \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } 0 < \gamma < \frac12,$$

since $\hat{L}$ is non-decreasing.

As with $F_3 = 0$ in the case $0 < \alpha < 1$, we have $J_3 = 0$, so $I_4 \ll (m/n)^{\gamma}$. Finally, we get

$$I_1 + I_2 + I_3 + I_4 \ll \Bigl(\frac{m}{n}\Bigr)^{\gamma} \quad \text{for some } \gamma > 0,$$

when $\alpha = 1$. This completes the proof of Lemma 2.2.

References

[1] Dudzinski, M. (2008) The Almost Sure Central Limit Theorems in the Joint Version for the Maxima and Sums of Certain Stationary Gaussian Sequences. Statistics & Probability Letters, 78, 347-357. http://dx.doi.org/10.1016/j.spl.2007.07.007

[2] Brosamler, G.A. (1988) An Almost Everywhere Central Limit Theorem. Mathematical Proceedings of the Cambridge Philosophical Society, 104, 561-574. http://dx.doi.org/10.1017/S0305004100065750

[3] Schatte, P. (1988) On Strong Versions of the Central Limit Theorem. Mathematische Nachrichten, 137, 249-256. http://dx.doi.org/10.1002/mana.19881370117

[4] Fahrner, I. and Stadtmuller, U. (1998) On Almost Sure Central Max-Limit Theorems. Statistics & Probability Letters, 37, 229-236.

[5] Cheng, H., Peng, L. and Qi, Y.C. (1998) Almost Sure Convergence in Extreme Value Theory. Mathematische Nachrichten, 190, 43-50. http://dx.doi.org/10.1002/mana.19981900104

[6] Chandrasekharan, K. and Minakshisundaram, S. (1952) Typical Means. Oxford University Press, Oxford.

[7] Leadbetter, M.R., Lindgren, G. and Rootzen, H. (1983) Extremes and Related Properties of Random Sequences and Processes. Springer, New York. http://dx.doi.org/10.1007/978-1-4612-5449-2

[8] Csaki, E. and Gonchigdanzan, K. (2002) Almost Sure Limit Theorems for the Maximum of Stationary Gaussian Sequences. Statistics & Probability Letters, 58, 195-203. http://dx.doi.org/10.1016/S0167-7152(02)00128-1
