http://www.scirp.org/journal/am ISSN Online: 2152-7393 ISSN Print: 2152-7385

**Spectral Density Estimation of Continuous Time Series**

**Ahmed Elhassanein1,2**

1Department of Mathematics, Faculty of Sciences and Arts, Bisha University, Bisha, Kingdom of Saudi Arabia
2Department of Mathematics, Faculty of Science, Damanhour University, Damanhour, Egypt

**Abstract **

This paper studies spectral density estimation for a strictly stationary r-vector valued continuous time series with missing observations. The finite Fourier transform is constructed in L joint segments of observations. The modified periodogram is defined and smoothed to estimate the spectral density matrix. We explore the properties of the proposed estimator, and its asymptotic distribution is discussed.

**Keywords **

Joint Segments of Observations, Modified Periodograms, Spectral Density Matrix, Wishart Matrix

**1. Introduction **

Although spectral analysis is one of the oldest tools for time series analysis, it remains one of the most widely used analysis techniques in many branches of science [1]-[6]. For zero mean r-vector valued strictly stationary time series, spectral estimation has been studied in [7]-[17]. Time series with missing observations frequently appear in practice. If a block of observations is periodically unobtainable, Jones [18] provides a development for spectral estimation of a stationary time series. The theory of amplitude-modulated stationary processes has been developed by Parzen [19] and applied to periodic missing observations problems [20]. The case where an observation is made or not according to the outcome of a Bernoulli trial has been discussed by Scheinok [21]. Bloomfield [22] considered the case where a more general random mechanism is involved. Broersen et al. [23] [24] developed models for time series with missing observations and discussed their use for spectral estimation. Unbiased spectral estimators assuming wavelet models of stationary time series have been formulated by [25]; their asymptotic properties have also been investigated.

How to cite this paper: Elhassanein, A. (2016) Spectral Density Estimation of Continuous Time Series. Applied Mathematics, 7, 2140-2148. http://dx.doi.org/10.4236/am.2016.717170

Received: September 2, 2016. Accepted: November 14, 2016. Published: November 17, 2016.

Copyright © 2016 by author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). http://creativecommons.org/licenses/by/4.0/

In this paper, we discuss the spectral analysis of a strictly stationary r-vector valued continuous time series with randomly missing observations in joint segments of observations. The paper is organized as follows. Section 2 introduces the basic definitions and assumptions. The modified series is defined in Section 3. Section 4 considers the expanded finite Fourier transform and its properties. The modified periodogram, the spectral density estimator and its properties are given in Section 5.

**2. Observed Series **

Let $X(t)$ $(t \in R)$ be a zero mean r-vector valued strictly stationary time series with

$$E\left\{ X(t+u)\, X'(t) \right\} = C_{XX}(u), \qquad t, u \in R, \tag{2.1}$$

and

$$\int_{-\infty}^{\infty} \left| C_{XX}(u) \right| \mathrm{d}u < \infty, \tag{2.2}$$

where $|C_{XX}(u)|$ denotes the matrix of absolute values, the bar denotes the complex conjugate and $'$ denotes the matrix transpose. We may then define $f_{XX}(\lambda)$, the $r \times r$ matrix of second order spectral densities, by

$$f_{XX}(\lambda) = (2\pi)^{-1} \int_{-\infty}^{\infty} C_{XX}(u) \exp(-i\lambda u)\, \mathrm{d}u, \qquad \lambda \in R. \tag{2.3}$$
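As an illustration of the transform in (2.3) (not part of the original development), the sketch below evaluates it numerically for a hypothetical exponential covariance $C(u) = \sigma^2 e^{-\alpha|u|}$, whose spectral density has the closed Lorentzian form $\sigma^2 \alpha / \big(\pi(\alpha^2 + \lambda^2)\big)$; the values of $\sigma^2$ and $\alpha$ are illustrative.

```python
import numpy as np
from scipy.integrate import quad

# f(lambda) = (2*pi)^(-1) * Integral_{-inf}^{inf} C(u) exp(-i*lambda*u) du
# for the hypothetical exponential covariance C(u) = sigma2 * exp(-alpha*|u|).
sigma2, alpha = 2.0, 1.5

def C(u):
    return sigma2 * np.exp(-alpha * np.abs(u))

def f_numeric(lam):
    # C(u) is even, so the transform reduces to a cosine integral on [0, inf);
    # the integrand decays like exp(-alpha*u), so truncating at u = 50 is safe
    integrand = lambda u: C(u) * np.cos(lam * u) / np.pi
    val, _ = quad(integrand, 0.0, 50.0)
    return val

def f_closed(lam):
    # closed-form Lorentzian spectrum of the exponential covariance
    return sigma2 * alpha / (np.pi * (alpha**2 + lam**2))

for lam in (0.0, 0.7, 3.0):
    assert abs(f_numeric(lam) - f_closed(lam)) < 1e-6
```

The agreement of the two evaluations is a quick consistency check of the normalisation $(2\pi)^{-1}$ used in (2.3).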

Using the assumed stationarity, we then set down

Assumption I. $X(t)$ is a strictly stationary continuous series all of whose moments exist. For each $j = 1, 2, \dots, k-1$ and any k-tuple $a_1, a_2, \dots, a_k$ we have

$$\int_{R^{k-1}} \left| u_j\, C_{a_1, \dots, a_k}(u_1, \dots, u_{k-1}) \right| \mathrm{d}u_1 \cdots \mathrm{d}u_{k-1} < \infty, \qquad k = 2, 3, \dots, \tag{2.4}$$

where

$$C_{a_1, \dots, a_k}(u_1, u_2, \dots, u_{k-1}) = cum\left\{ X_{a_1}(t + u_1),\, X_{a_2}(t + u_2),\, \dots,\, X_{a_k}(t) \right\} \tag{2.5}$$

$(a_1, a_2, \dots, a_k = 1, 2, \dots, r;\ u_1, u_2, \dots, u_{k-1}, t \in R;\ k = 2, \dots)$.

Because cumulants are measures of the joint dependence of random variables, (2.4) is seen to be a form of mixing or asymptotic independence requirement for values of $X(t)$ well separated in time. If $X(t)$ satisfies Assumption I, we may define its cumulant spectral densities by

$$f_{a_1, \dots, a_k}(\lambda_1, \dots, \lambda_{k-1}) = (2\pi)^{-(k-1)} \int_{R^{k-1}} C_{a_1, \dots, a_k}(u_1, \dots, u_{k-1}) \exp\Big\{ -i \sum_{j=1}^{k-1} \lambda_j u_j \Big\}\, \mathrm{d}u_1 \cdots \mathrm{d}u_{k-1} \tag{2.6}$$

$(-\infty < \lambda_j < \infty;\ a_1, a_2, \dots, a_k = 1, 2, \dots, r;\ k = 2, \dots)$. If $k = 2$ the cross-spectra $f_{a_1 a_2}(\lambda)$ are collected together in the matrix $f_{XX}(\lambda)$ of (2.3).

λ of (2.3).Assumption II. Let *a*( )*T*

### ( )

*a*,

### [

0,### )

*t*

*h* *t* *h* *t* *T*
*T*

= _{ } ∈

is bounded, is of bounded variation

**3. Modified Series **

Let $D(t) = \{ D_a(t),\ t \in R \}$, $a = 1, 2, \dots, r$, be a process independent of $X(t)$ such that, for every $t$,

$$P\{ D_a(t) = 1 \} = p, \qquad P\{ D_a(t) = 0 \} = q;$$

note that $E\{ D_a(t) \} = p$. The success of recording one observation does not depend on the failure of another, so the indicators are independent. We may then define the modified series

$$Y(t) = D(t) X(t),$$

with components

$$Y_a(t) = D_a(t) X_a(t),$$

where

$$D_a(t) = \begin{cases} 1 & \text{if } X_a(t) \text{ is observed}, \\ 0 & \text{if } X_a(t) \text{ is missed}. \end{cases}$$
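As a sketch (on a hypothetical regular sampling grid, which the paper does not assume), the modified series can be simulated as follows; the values of p, r and the grid size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sampled version of the modified series Y(t) = D(t) X(t): each component is
# observed independently with probability p (p, r and n are illustrative).
p, r, n = 0.8, 3, 10_000
X = rng.standard_normal((r, n))       # stand-in for the r-vector series X(t)
D = rng.binomial(1, p, size=(r, n))   # Bernoulli indicators D_a(t)
Y = D * X                             # missing values are recorded as zeros

# E{D_a(t)} = p, so the empirical observation rate should be close to p
assert abs(D.mean() - p) < 0.02
# wherever D_a(t) = 0 the modified series vanishes
assert np.all(Y[D == 0] == 0)
```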

**4. Expanded Finite Fourier Transform in L-Joint Segments of Observations**

In the case where there are some randomly missing observations, Elhassanein [17] constructed the expanded finite Fourier transform on disjoint segments of observations. In this section the expanded finite Fourier transform is constructed in L joint segments of observations for a strictly stationary r-vector valued time series. Expressions for its mean, variance and cumulants are derived. The results introduced here may be regarded as a generalization of [13] and [17]. Let $X(t)$ $(t \in (0, T))$ be an observed stretch of data with some randomly missing observations. Let $T = L(N - M) + M$, where L is the number of joint segments, N is the length of each segment and M is the length of the joint parts, $0 \le M < N$; when $M = 0$ we recover the results in [17]. The expanded finite Fourier transform of a given stretch of data is defined by

$$d_Y^{(l(N-M))}(\lambda) = \left[ 2\pi \int_{l(N-M)}^{(l+1)(N-M)+M} h^2\big(t - l(N-M)\big)\, \mathrm{d}t \right]^{-1/2} \int_{l(N-M)}^{(l+1)(N-M)+M} h\big(t - l(N-M)\big) \exp(-i\lambda t)\, Y(t)\, \mathrm{d}t, \tag{4.1}$$

where $-\infty < \lambda < \infty$, $l = 0, 1, \dots, L-1$, and $h(t)$ is the data window satisfying Assumption II.

Theorem 4.1. Let $X(t)$ $(t \in (0, T))$ be a strictly stationary r-vector valued time series with mean zero, satisfying Assumption I, and let $d_a^{(l(N-M))}(\lambda)$ be defined as in (4.1). Then

$$E\left\{ d_a^{(l(N-M))}(\lambda) \right\} = 0, \tag{4.2}$$

$$\begin{aligned} Cov\left\{ d_a^{(l(N-M))}(\lambda_1),\, d_b^{(l(N-M))}(\lambda_2) \right\} &= p^2 \exp\left\{ -i(\lambda_1 - \lambda_2)\, l(N-M) \right\} \int C_{ab}(u) \exp(-i\lambda_1 u)\, H_{ab}^{(N)}(\lambda_1 - \lambda_2, u)\, \mathrm{d}u \\ &= p^2 \exp\left\{ -i(\lambda_1 - \lambda_2)\, l(N-M) \right\} \int_R f_{ab}(\nu)\, \Phi_{ab}^{(N)}(\lambda_1 - \nu, \lambda_2 - \nu)\, \mathrm{d}\nu, \end{aligned} \tag{4.3}$$

where

$$H_{ab}^{(N)}(\lambda_1 - \lambda_2, u) = \left[ 2\pi \left( \int_0^N h_a^{(N)}(t_1)^2\, \mathrm{d}t_1 \int_0^N h_b^{(N)}(t_2)^2\, \mathrm{d}t_2 \right)^{1/2} \right]^{-1} \int_0^N h_a^{(N)}(u + t)\, h_b^{(N)}(t) \exp\big( -it(\lambda_1 - \lambda_2) \big)\, \mathrm{d}t$$

and

$$\Phi_{ab}^{(N)}(\lambda_1, \lambda_2) = \left[ 2\pi \left( \int_0^N h_a^{(N)}(t_1)^2\, \mathrm{d}t_1 \int_0^N h_b^{(N)}(t_2)^2\, \mathrm{d}t_2 \right)^{1/2} \right]^{-1} H_a^{(N-M)}(\lambda_1)\, H_b^{(N-M)}(\lambda_2),$$

where $H_a^{(N)}(\lambda) = \int_0^N h_a^{(N)}(t) \exp(-i\lambda t)\, \mathrm{d}t$. For $\lambda_1 = \lambda_2 = \lambda$ and $a = b$, then

$$Var\left\{ d_a^{(l(N-M))}(\lambda) \right\} = p^2 \int_{-\infty}^{\infty} f_{aa}(\lambda - \nu)\, \Phi_{aa}^{(N)}(\nu)\, \mathrm{d}\nu, \tag{4.4}$$

$$Cum\left\{ d_{a_1}^{(l_1(N-M))}(\lambda_1), \dots, d_{a_k}^{(l_k(N-M))}(\lambda_k) \right\} = p^k \prod_{j=1}^k \left[ 2\pi \int_0^N h_{a_j}^{(N)}(t_j)^2\, \mathrm{d}t_j \right]^{-1/2} (2\pi)^{k-1} f_{a_1, \dots, a_k}(\lambda_1, \dots, \lambda_{k-1})\, G_{a_1 \dots a_k}^{(N)}\Big( \sum_{j=1}^k \lambda_j \Big) + O\big( N^{-k/2} \big), \tag{4.5}$$

where $O\big( N^{-k/2} \big)$ is uniform in $\lambda_1, \lambda_2, \dots, \lambda_{k-1}$ as $N \to \infty$, $k = 2, \dots$, and

$$G_{a_1 \dots a_k}^{(N)}(\lambda) = \int_0^N \prod_{j=1}^k h_{a_j}^{(N)}(t) \exp(-i\lambda t)\, \mathrm{d}t, \qquad \lambda \ne 0,\ t \in R.$$

Proof. We prove (4.5). By (4.1) we get

$$\begin{aligned} Cum&\left\{ d_{a_1}^{(l_1(N-M))}(\lambda_1), \dots, d_{a_k}^{(l_k(N-M))}(\lambda_k) \right\} \\ &= p^k \prod_{j=1}^k \left[ 2\pi \int_0^N h_{a_j}^{(N)}(t_j)^2\, \mathrm{d}t_j \right]^{-1/2} \int_0^N \!\!\cdots\! \int_0^N \prod_{j=1}^k h_{a_j}^{(N)}(t_j) \exp\Big( -i \sum_{j=1}^k \lambda_j t_j \Big)\, C_{a_1, \dots, a_k}(t_1 - t_k, \dots, t_{k-1} - t_k)\, \mathrm{d}t_1 \cdots \mathrm{d}t_k. \end{aligned}$$

Let $u_j = t_j - t_k$, $t_k = t$, $j = 1, 2, \dots, k-1$. Since, by the bounded variation of the windows, replacing each $h_{a_j}^{(N)}(u_j + t)$ by $h_{a_j}^{(N)}(t)$ produces an error bounded by $A\, C \sum_{j=1}^{k-1} |u_j|$ for some constants $A, C$ $\big( t, u_j, \lambda_j \in R,\ j = 1, \dots, k \big)$, we get

$$\begin{aligned} Cum&\left\{ d_{a_1}^{(l_1(N-M))}(\lambda_1), \dots, d_{a_k}^{(l_k(N-M))}(\lambda_k) \right\} \\ &= p^k \prod_{j=1}^k \left[ 2\pi \int_0^N h_{a_j}^{(N)}(t_j)^2\, \mathrm{d}t_j \right]^{-1/2} \int_0^N \prod_{j=1}^k h_{a_j}^{(N)}(t) \exp\Big( -it \sum_{j=1}^k \lambda_j \Big)\, \mathrm{d}t \int_{R^{k-1}} C_{a_1, \dots, a_k}(u_1, \dots, u_{k-1}) \exp\Big( -i \sum_{j=1}^{k-1} \lambda_j u_j \Big)\, \mathrm{d}u_1 \cdots \mathrm{d}u_{k-1} + \varepsilon_T, \end{aligned}$$

where

$$|\varepsilon_T| \le p^k \prod_{j=1}^k \left[ 2\pi \int_0^N h_{a_j}^{(N)}(t_j)^2\, \mathrm{d}t_j \right]^{-1/2} A\, C \int_{R^{k-1}} \Big( \sum_{j=1}^{k-1} |u_j| \Big) \left| C_{a_1, \dots, a_k}(u_1, \dots, u_{k-1}) \right| \mathrm{d}u_1 \cdots \mathrm{d}u_{k-1}.$$

Since the $h_{a_j}^{(N)}(t)$ satisfy Assumption II for $j = 1, \dots, k$,

$$\prod_{j=1}^k \int_0^N h_{a_j}^{(N)}(t_j)^2\, \mathrm{d}t_j \sim T^k \prod_{j=1}^k \int_0^1 h_{a_j}(u)^2\, \mathrm{d}u,$$

which, with (2.4), implies $\varepsilon_T = O\big( T^{-k/2} \big)$; using (2.6) the proof is completed. □

**5. Estimation **

Using the expanded finite Fourier transform (4.1), we construct the modified periodogram as

$$I_{ab}^{(l(N-M))}(\lambda) = p^{-2} \left[ 2\pi \int_{l(N-M)}^{(l+1)(N-M)+M} h_a^{(N)}(t)\, h_b^{(N)}(t)\, \mathrm{d}t \right]^{-1} \alpha_a^{(l(N-M))}(\lambda)\, \overline{\alpha_b^{(l(N-M))}(\lambda)}, \tag{5.1}$$

such that

$$\alpha_a^{(l(N-M))}(\lambda) = \left[ 2\pi \int_{l(N-M)}^{(l+1)(N-M)+M} h_a^{(N)}(t)^2\, \mathrm{d}t \right]^{1/2} d_a^{(l(N-M))}(\lambda),$$

where the bar denotes the complex conjugate. The smoothed spectral density estimate is constructed as

$$f_{ab}^{(T)}(\lambda) = \frac{1}{L} \sum_{l=0}^{L-1} I_{ab}^{(l(N-M))}(\lambda), \qquad a, b = 1, 2, \dots, r. \tag{5.2}$$

Theorem 5.1. Let $X(t)$ $(t \in R)$ be a strictly stationary r-vector valued continuous time series with mean zero, satisfying Assumption I. Let $I_{YY}^{(T)}(\lambda) = \left\{ I_{ab}^{(T)}(\lambda) \right\}$, $a, b = 1, 2, \dots, r$. Then

$$E\left\{ I_{ab}^{(l(N-M))}(\lambda) \right\} = f_{ab}(\lambda) + O(N^{-1}), \tag{5.3}$$

$$\begin{aligned} Cov&\left\{ I_{a_1 b_1}^{(l_1(N-M))}(\lambda_1),\, I_{a_2 b_2}^{(l_2(N-M))}(\lambda_2) \right\} = \left[ \Phi_{a_1 b_1}^{(N)}(0)\, \Phi_{a_2 b_2}^{(N)}(0) \right]^{-1} \\ &\times \Big\{ \Phi_{a_1 a_2}^{(N)}(\lambda_1 - \lambda_2)\, \Phi_{b_1 b_2}^{(N)}\big( {-(\lambda_1 - \lambda_2)} \big)\, f_{a_1 a_2}(\lambda_1)\, f_{b_1 b_2}(-\lambda_1) \\ &\quad + \Phi_{a_1 b_2}^{(N)}(\lambda_1 + \lambda_2)\, \Phi_{b_1 a_2}^{(N)}\big( {-(\lambda_1 + \lambda_2)} \big)\, f_{a_1 b_2}(\lambda_1)\, f_{b_1 a_2}(-\lambda_1) \\ &\quad + 2\pi N^{-1}\, \Phi_{a_1 b_1 a_2 b_2}^{(N)}(0)\, f_{a_1 b_1 a_2 b_2}(\lambda_1, -\lambda_1, \lambda_2) \Big\} + O(N^{-1}), \end{aligned} \tag{5.4}$$

$$\begin{aligned} Cum&\left\{ I_{a_1 b_1}^{(l_1(N-M))}(\lambda_1), \dots, I_{a_k b_k}^{(l_k(N-M))}(\lambda_k) \right\} \\ &= \left[ \prod_{j=1}^k \Phi_{a_j b_j}^{(N)}(0) \right]^{-1} \sum \prod_{j=1}^k \exp\left\{ -i l_j (N - M)(\mu_j + \gamma_j) \right\} \Phi_{c_j d_j}^{(N)}(\mu_j + \gamma_j)\, f_{c_j d_j}(\mu_j) + O(N^{-1}), \end{aligned} \tag{5.5}$$

where the summation extends over all partitions $\{ (c_1, \mu_1), (d_1, \gamma_1) \}, \dots, \{ (c_k, \mu_k), (d_k, \gamma_k) \}$ into pairs of the quantities $(a_1, \lambda_1), (b_1, -\lambda_1), \dots, (a_k, \lambda_k), (b_k, -\lambda_k)$, excluding the case with $\mu_j = -\gamma_j = \lambda_m$ for some $j, m$, and where $O(N^{-1})$ is uniform in $\lambda_1, \dots, \lambda_k$.

Proof. By (5.1), we have

$$E\left\{ I_{ab}^{(l(N-M))}(\lambda) \right\} = p^{-2} \left[ \Phi_{ab}^{(N)}(0) \right]^{-1} E\left\{ d_a^{(l(N-M))}(\lambda)\, \overline{d_b^{(l(N-M))}(\lambda)} \right\} = p^{-2} \left[ \Phi_{ab}^{(N)}(0) \right]^{-1} Cov\left\{ d_a^{(l(N-M))}(\lambda),\, d_b^{(l(N-M))}(\lambda) \right\};$$

then by (4.3) the proof of (5.3) is completed. From (5.1), and by Theorem (2.3.2) in [10] p. 21, we have

$$\begin{aligned} Cov&\left\{ I_{a_1 b_1}^{(l_1(N-M))}(\lambda_1),\, I_{a_2 b_2}^{(l_2(N-M))}(\lambda_2) \right\} \\ &\propto Cum\left\{ d_{a_1}(\lambda_1),\, \overline{d_{b_1}(\lambda_1)},\, d_{a_2}(\lambda_2),\, \overline{d_{b_2}(\lambda_2)} \right\} \\ &\quad + Cov\left\{ d_{a_1}(\lambda_1),\, d_{a_2}(\lambda_2) \right\} Cov\left\{ \overline{d_{b_1}(\lambda_1)},\, \overline{d_{b_2}(\lambda_2)} \right\} \\ &\quad + Cov\left\{ d_{a_1}(\lambda_1),\, \overline{d_{b_2}(\lambda_2)} \right\} Cov\left\{ \overline{d_{b_1}(\lambda_1)},\, d_{a_2}(\lambda_2) \right\}, \end{aligned}$$

where the segment superscripts $(l_j(N-M))$ have been suppressed. By Theorem (4.1) the proof of (5.4) is completed. From (5.1), we have

$$Cum\left\{ I_{a_1 b_1}^{(l_1(N-M))}(\lambda_1), \dots, I_{a_k b_k}^{(l_k(N-M))}(\lambda_k) \right\} = p^{-2k} \left[ \prod_{j=1}^k \Phi_{a_j b_j}^{(N)}(0) \right]^{-1} Cum\left\{ d_{a_1}^{(l_1(N-M))}(\lambda_1)\, \overline{d_{b_1}^{(l_1(N-M))}(\lambda_1)}, \dots, d_{a_k}^{(l_k(N-M))}(\lambda_k)\, \overline{d_{b_k}^{(l_k(N-M))}(\lambda_k)} \right\}.$$

By Theorem (2.3.2) in [10] p. 21, we get

$$Cum\left\{ d_{a_1} \overline{d_{b_1}}, \dots, d_{a_k} \overline{d_{b_k}} \right\} = \sum_{\nu} \prod_{i=1}^{s} Cum\left\{ d_j;\ j \in \nu_i \right\},$$

where the summation extends over all indecomposable partitions $\nu = \nu_1 \cup \cdots \cup \nu_s$, $1 \le s \le k$, of the transformed table

$$\begin{matrix} (a_1, \lambda_1) & (b_1, -\lambda_1) \\ (a_2, \lambda_2) & (b_2, -\lambda_2) \\ \vdots & \vdots \\ (a_k, \lambda_k) & (b_k, -\lambda_k) \end{matrix} \;\longrightarrow\; \begin{matrix} (c_1, \mu_1) & (d_1, \gamma_1) \\ (c_2, \mu_2) & (d_2, \gamma_2) \\ \vdots & \vdots \\ (c_k, \mu_k) & (d_k, \gamma_k) \end{matrix}$$

Then, by Theorem (4.1), we get the proof of (5.5). □
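The estimator (5.1)-(5.2) can be sketched numerically. The following Python sketch assumes a regularly sampled grid with unit step, a common Hanning taper for all components, and illustrative segment parameters; none of these choices are prescribed by the paper.

```python
import numpy as np

def smoothed_spectral_estimate(Y, L, N, M, lam, p, h=None):
    """Sketch of (5.1)-(5.2): modified periodogram matrices of the L joint
    segments of Y (shape (r, T), T = L*(N-M) + M), averaged over segments
    to estimate the spectral density matrix at frequency lam."""
    r, T = Y.shape
    if h is None:
        h = np.hanning(N)
    t = np.arange(N)
    norm = 2.0 * np.pi * p**2 * np.sum(h**2)  # (5.1) with h_a = h_b = h, step 1
    f_hat = np.zeros((r, r), dtype=complex)
    for l in range(L):
        start = l * (N - M)
        seg = Y[:, start:start + N]
        # alpha in (5.1): unnormalised tapered transform of the segment
        alpha = (seg * h * np.exp(-1j * lam * (start + t))).sum(axis=1)
        f_hat += np.outer(alpha, alpha.conj()) / norm   # I^{(l(N-M))}(lam)
    return f_hat / L                                    # average (5.2)

# toy usage: fully observed (p = 1) white noise with illustrative parameters
rng = np.random.default_rng(2)
L, N, M, r, p = 8, 512, 128, 2, 1.0
Y = rng.standard_normal((r, L * (N - M) + M))
F = smoothed_spectral_estimate(Y, L, N, M, lam=1.0, p=p)
assert F.shape == (r, r) and np.allclose(F, F.conj().T)  # Hermitian estimate
```

Each segment contributes a rank-one Hermitian matrix, so the average is Hermitian with nonnegative diagonal, as a spectral density matrix estimate should be.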

Theorem 5.2. Let $X(t)$ $(t \in R)$ be a strictly stationary r-vector valued time series with mean zero, satisfying Assumption I. Let $I_{YY}^{(l(N-M))}(\lambda) = \left\{ I_{ab}^{(l(N-M))}(\lambda) \right\}$, $a, b = 1, 2, \dots, r$, be given by (5.1), let $2\lambda_j, \lambda_j \pm \lambda_k \not\equiv 0\ (\mathrm{mod}\ 2\pi)$ for $1 \le j < k \le J$, and let $h_a(t)$ satisfy Assumption II for $a = 1, 2, \dots, r$. Then $I_{YY}^{(l(N-M))}(\lambda_j)$, $j = 1, 2, \dots, J$, are asymptotically independent $W_r^c\big( 1, f_{XX}(\lambda_j) \big)$ variates. Also if $\lambda = \pm\pi, \pm 3\pi, \dots$ then $I_{YY}^{(l(N-M))}(\lambda)$ is asymptotically $W_r\big( 1, f_{XX}(\lambda) \big)$, independent of the previous variates. Here $W_r(\gamma, \Sigma)$ denotes an $r \times r$ symmetric matrix-valued Wishart variate with covariance matrix $\Sigma$ and $\gamma$ degrees of freedom, and $W_r^c(\gamma, \Sigma)$ denotes an $r \times r$ Hermitian matrix-valued complex Wishart variate with covariance matrix $\Sigma$ and $\gamma$ degrees of freedom.

Proof. The proof comes directly from Theorem (4.2); for more details about the Wishart distribution see [26].

Theorem 5.3. Let $X(t)$ $(t \in R)$ be a strictly stationary r-vector valued time series with mean zero, satisfying Assumption I. Let $f_{ab}^{(T)}(\lambda)$ be given by (5.2), $a, b = 1, 2, \dots, r$. Then

$$E\left\{ f_{ab}^{(T)}(\lambda) \right\} = f_{ab}(\lambda) + O(N^{-1}), \tag{5.6}$$

$$\begin{aligned} Cov&\left\{ f_{a_1 b_1}^{(T)}(\lambda_1),\, f_{a_2 b_2}^{(T)}(\lambda_2) \right\} = \frac{1}{L^2} \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} \left[ \Phi_{a_1 b_1}^{(N)}(0)\, \Phi_{a_2 b_2}^{(N)}(0) \right]^{-1} \\ &\times \Big\{ \Phi_{a_1 a_2}^{(N)}(\lambda_1 - \lambda_2)\, \Phi_{b_1 b_2}^{(N)}\big( {-(\lambda_1 - \lambda_2)} \big) \exp\left\{ -i (l_2 - l_1)(N-M)(\lambda_1 - \lambda_2) \right\} f_{a_1 a_2}(\lambda_1)\, f_{b_1 b_2}(-\lambda_1) \\ &\quad + \Phi_{a_1 b_2}^{(N)}(\lambda_1 + \lambda_2)\, \Phi_{b_1 a_2}^{(N)}\big( {-(\lambda_1 + \lambda_2)} \big) \exp\left\{ -i (l_2 - l_1)(N-M)(\lambda_1 + \lambda_2) \right\} f_{a_1 b_2}(\lambda_1)\, f_{b_1 a_2}(-\lambda_1) \\ &\quad + 2\pi N^{-1}\, \Phi_{a_1 b_1 a_2 b_2}^{(N)}(0)\, f_{a_1 b_1 a_2 b_2}(\lambda_1, -\lambda_1, \lambda_2) \Big\} + O(N^{-1}). \end{aligned} \tag{5.7}$$

Proof. By (5.2), we have

$$E\left\{ f_{ab}^{(T)}(\lambda) \right\} = \frac{1}{L} \sum_{l=0}^{L-1} E\left\{ I_{ab}^{(l(N-M))}(\lambda) \right\},$$

then by (5.3) the proof of (5.6) is completed. From (5.2), we get

$$Cov\left\{ f_{a_1 b_1}^{(T)}(\lambda_1),\, f_{a_2 b_2}^{(T)}(\lambda_2) \right\} = \frac{1}{L^2} \sum_{l_1=0}^{L-1} \sum_{l_2=0}^{L-1} Cov\left\{ I_{a_1 b_1}^{(l_1(N-M))}(\lambda_1),\, I_{a_2 b_2}^{(l_2(N-M))}(\lambda_2) \right\},$$

which, together with (5.4), completes the proof of (5.7). □
Theorem 5.4. Let $X(t)$ $(t \in R)$ be a strictly stationary r-vector valued time series with mean zero, satisfying Assumption I. Let $f_{ab}^{(T)}(\lambda)$ be given by (5.2), $a, b = 1, 2, \dots, r$, and let $2\lambda_j, \lambda_j \pm \lambda_k \not\equiv 0\ (\mathrm{mod}\ 2\pi)$ for $1 \le j < k \le J$. Then $L f_{ab}^{(T)}(\lambda_j)$, $j = 1, 2, \dots, J$, are asymptotically independent $W_r^c\big( L, f_{ab}(\lambda_j) \big)$ variates. Also if $\lambda = \pm\pi, \pm 3\pi, \dots$ then $L f_{ab}^{(T)}(\lambda)$ is asymptotically $W_r\big( L, f_{ab}(\lambda) \big)$, independent of the previous variates.

Proof. The proof comes directly from Theorem (5.3) and Theorem (7.3.2) in [26] p. 162. □
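For $r = 1$ the complex Wishart limit in Theorem 5.4 reduces to a scaled chi-square, $f_{ab}^{(T)}(\lambda) \approx f(\lambda)\, \chi^2_{2L} / (2L)$, which yields approximate confidence intervals for the spectral density. The sketch below illustrates this; the function name and parameter values are hypothetical, not from the paper.

```python
from scipy.stats import chi2

# For r = 1 and lam not a multiple of pi, Theorem 5.4 gives L * f_hat(lam)
# asymptotically W_1^c(L, f(lam)), i.e. f_hat ~ f * chi2(2L) / (2L).
# Sketch of the resulting (1 - a) confidence interval for f(lam):
def spectral_ci(f_hat, L, a=0.05):
    lo = 2 * L * f_hat / chi2.ppf(1 - a / 2, df=2 * L)
    hi = 2 * L * f_hat / chi2.ppf(a / 2, df=2 * L)
    return lo, hi

# with L = 8 segments and an estimate f_hat = 1.0, the interval brackets f_hat
lo, hi = spectral_ci(f_hat=1.0, L=8, a=0.05)
assert 0 < lo < 1.0 < hi
```

Increasing the number of segments L tightens the interval, at the usual cost of frequency resolution through the shorter segment length N.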

**References **

[1] Ólafsdóttir, K.B., Schulz, M. and Mudelsee, M. (2016) REDFIT-X: Cross-Spectral Analysis of Unevenly Spaced Paleoclimate Time Series. Computers & Geosciences, 91, 11-18.

http://dx.doi.org/10.1016/j.cageo.2016.03.001

[2] Fong, S., Cho, K., Mohammed, O., Fiaidhi, J. and Mohammed, S. (2016) A Time Series Pre-Processing Methodology with Statistical and Spectral Analysis for Classifying Non-Stationary Stochastic Biosignals. The Journal of Supercomputing, 72, 3887-3908.

http://dx.doi.org/10.1007/s11227-016-1635-9

[3] Huang, N.E., et al. (2016) On Holo-Hilbert Spectral Analysis: A Full Informational Spectral Representation for Nonlinear and Non-Stationary Data. Philosophical Transactions of the Royal Society A, 374, 20150206. http://dx.doi.org/10.1098/rsta.2015.0206

[4] Kotoku, J., Kumagai, S., Uemura, R., Nakabayashi, S. and Kobayashi, T. (2016) Automatic Anomaly Detection of Respiratory Motion Based on Singular Spectrum Analysis. International Journal of Medical Physics, Clinical Engineering and Radiation Oncology, 5, 88-95.

[5] Miller, K.J., Schalk, G., Hermes, D., Ojemann, J.G. and Rao, R.P.N. (2016) Spontaneous Decoding of the Timing and Content of Human Object Perception from Cortical Surface Recordings Reveals Complementary Information in the Event-Related Potential and Broadband Spectral Change. PLOS Computational Biology, 12, e1004660. http://dx.doi.org/10.1371/journal.pcbi.1004660

[6] Kim, J., Park, S., Jeung, G. and Lee, J. (2016) Estimation of a Menstrual Cycle by Covariance Stationary-Time Series Analysis on the Basal Body Temperatures. Journal of Medical and Bioengineering, 5, 63-66. http://dx.doi.org/10.12720/jomb.5.1.63-66

[7] Brillinger, D.R. (1969) Asymptotic Properties of Spectral Estimate of Second Order. Biometrika, 56, 375-390. http://dx.doi.org/10.1093/biomet/56.2.375

[8] Dahlhaus, R. (1985) On Spectral Density Estimate Obtained by Averaging Periodograms.

Journal of Applied Probability, 22, 598-610. http://dx.doi.org/10.1017/S0021900200029351

[9] Bloomfield, P. (2000) Fourier Analysis of Time Series: An Introduction. 2nd Edition, John Wiley & Sons, Inc., Hoboken. http://dx.doi.org/10.1002/0471722235

[10] Brillinger, D.R. (2001) Time Series Data Analysis and Theory. Society for Industrial and Applied Mathematics. http://dx.doi.org/10.1137/1.9780898719246

[11] Broersen, P.M.T. (2006) Automatic Autocorrelation and Spectral Analysis. Springer-Verlag London Limited, London.

[12] Ghazal, M.A. and Elhassanein, A. (2006) Periodogram Analysis with Missing Observations.

Journal of Applied Mathematics and Computing, 22, 209-222.

[13] Ghazal, M.A. and Elhassanein, A. (2007) Nonparametric Spectral Analysis of Continuous Time Series. Bulletin of Statistics and Economics, 1, 41-52.

[14] Ghazal, M.A. and Elhassanein, A. (2008) Spectral Analysis of Time Series in Joint Segments of Observations. Journal of Applied Mathematics & Informatics, 26, 933-943.

[15] Ghazal, M.A. and Elhassanein, A. (2009) Dynamics of EXPAR Models for High Frequency Data. IJAMAS, 14, 88-96.

[16] Elhassanein, A. (2011) Nonparametric Spectral Analysis on Disjoint Segments of Observa-tions. JAMSI, 7, 81-96.

[17] Elhassanein, A. (2014) On the Theory of Continuous Time Series. Indian Journal of Pure and Applied Mathematics, 45, 297-310.

[18] Jones, R.H. (1962) Spectral Analysis with Regularly Missed Observations. The Annals of Mathematical Statistics, 33, 455-461. http://dx.doi.org/10.1214/aoms/1177704572

[19] Parzen, E. (1962) Spectral Analysis of Asymptotically Stationary Time Series. Bulletin de l'Institut International de Statistique, 33rd Session, Paris.

[20] Parzen, E. (1963) On Spectral Analysis with Missing Observations and Amplitude Modulation. Sankhya A, 25, 383-392.

[21] Scheinok, P.A. (1965) Spectral Analysis with Randomly Missed Observations: The Binomial Case. The Annals of Mathematical Statistics, 36, 971-977.

http://dx.doi.org/10.1214/aoms/1177700069

[22] Bloomfield, P. (1970) Spectral Analysis with Randomly Missing Observations. Journal of the Royal Statistical Society, 32, 369-380.

[23] Broersen, P.M.T., de Waele, S. and Bos, R. (2004) Autoregressive Spectral Analysis When Observations Are Missing. Automatica, 40, 1495-1504.

http://dx.doi.org/10.1016/j.automatica.2004.04.011

[24] Broersen, P.M.T. (2006) Automatic Spectral Analysis with Missing Data. Digital Signal Processing, 16, 754-766. http://dx.doi.org/10.1016/j.dsp.2006.01.001

[25] Mondal, D. and Percival, D.B. (2008) Wavelet Variance Analysis for Gappy Time Series.

Annals of the Institute of Statistical Mathematics, 62, 943-966.

[26] Anderson, T.W. (1972) An Introduction to Multivariate Statistical Analysis. Wiley Eastern Limited, New Delhi.
