
RESEARCH    Open Access

Maximum likelihood estimators in linear regression models with Ornstein-Uhlenbeck process

Hongchang Hu¹, Xiong Pan²* and Lifeng Xu¹

*Correspondence: pxjlh@163.com. ²Faculty of Information Engineering, China University of Geosciences, Wuhan, 430074, China. Full list of author information is available at the end of the article.

Abstract

The paper studies the linear regression model
$$y_t = x_t^T\beta + \varepsilon_t,\quad t = 1, 2, \ldots, n,$$
where
$$d\varepsilon_t = \lambda(\mu - \varepsilon_t)\,dt + \sigma\,dB_t,$$
with parameters $\lambda, \sigma \in \mathbb{R}^+$, $\mu \in \mathbb{R}$ and $\{B_t, t \ge 0\}$ the standard Brownian motion. Firstly, the maximum likelihood (ML) estimators of $\beta$, $\lambda$ and $\sigma^2$ are given. Secondly, under general conditions, the asymptotic properties of the ML estimators are investigated. Limiting distributions for likelihood ratio test statistics of the hypothesis are also given. Lastly, the validity of the method is illustrated by two real examples.

MSC: 62J05; 62M10; 60J60

Keywords: linear regression model; maximum likelihood estimator; Ornstein-Uhlenbeck process; asymptotic property; likelihood ratio test

1 Introduction

Consider the following linear regression model:
$$y_t = x_t^T\beta + \varepsilon_t,\quad t = 1, 2, \ldots, n, \tag{1.1}$$
where the $y_t$'s are scalar response variables, the $x_t$'s are explanatory variables, $\beta$ is an $m$-dimensional unknown parameter, and $\{\varepsilon_t\}$ is an Ornstein-Uhlenbeck process, which satisfies the linear stochastic differential equation (SDE)
$$d\varepsilon_t = \lambda(\mu - \varepsilon_t)\,dt + \sigma\,dB_t \tag{1.2}$$
with parameters $\lambda, \sigma \in \mathbb{R}^+$, $\mu \in \mathbb{R}$ and $\{B_t, t \ge 0\}$ the standard Brownian motion.

It is well known that the linear regression model is one of the most important and popular models in the statistical literature, and it has attracted many researchers. For an ordinary linear regression model (when the errors are independent and identically distributed (i.i.d.) random variables), Wang and Zhou [], Anatolyev [], Bai and Guo [], Chen [], Gil et al. [], Hampel et al. [], Cui [], Durbin [] and Li and Yang [] used various estimation methods to obtain estimators of the unknown parameters in (1.1) and discussed some large or small sample properties of these estimators. Recently, linear regression with serially correlated errors has attracted increasing attention from statisticians and economists. One case of considerable interest is that the errors are autoregressive processes; Hu [], Wu [], and Fox and Taqqu [] established its asymptotic normality with the usual $\sqrt{n}$-normalization in the case of long memory stationary Gaussian observation errors. Giraitis and Surgailis [] extended this result to non-Gaussian linear sequences. Koul and Surgailis [] established the asymptotic normality of the Whittle estimator in linear regression models with non-Gaussian long memory moving average errors. Shiohama and Taniguchi [] estimated the regression parameters in a linear regression model with autoregressive process. Fan [] investigated moderate deviations for M-estimators in linear models with $\phi$-mixing errors.

The Ornstein-Uhlenbeck process was originally introduced by Ornstein and Uhlenbeck [] as a model for particle motion in a fluid. In the physical sciences, the Ornstein-Uhlenbeck process is a prototype of a noisy relaxation process, whose probability density function $f(x,t)$ can be described by the Fokker-Planck equation (see Janczura et al. [], Debbasch et al. [], Gillespie [], Ditlevsen and Lansky [], Garbaczewski and Olkiewicz [], Plastino and Plastino []):
$$\frac{\partial f(x,t)}{\partial t} = \frac{\partial}{\partial x}\bigl[\lambda(x-\mu)f(x,t)\bigr] + \frac{\sigma^2}{2}\,\frac{\partial^2 f(x,t)}{\partial x^2}.$$

This process is now widely used in many areas of application. The main characteristic of the Ornstein-Uhlenbeck process is the tendency to return towards its long-term equilibrium $\mu$. This property, known as mean reversion, is found in many real-life processes, e.g., in commodity and energy price processes (see Fasen [], Yu [], Geman []). There are a number of papers concerned with the Ornstein-Uhlenbeck process, for example, Janczura et al. [], Zhang et al. [], Rieder [], Iacus [], Bishwal [], Shimizu [], Zhang and Zhang [], Chronopoulou and Viens [], Lin and Wang [] and Xiao et al. []. It is well known that the solution of model (1.2) is an autoregressive process. For a constant, functional, or random coefficient autoregressive model, many authors (for example, Magdalinos [], Andrews and Guggenberger [], Fan and Yao [], Berk [], Goldenshluger and Zeevi [], Liebscher [], Baran et al. [], Distaso [] and Harvill and Ray []) used various estimation methods to obtain estimators and discussed some asymptotic properties of these estimators, or investigated hypothesis testing.

By (.) and (.), we can obtain that the more general process satisfies the SDE

dyt=λ

L(t,λ,μ,β) –yt

dt+σdBt, (.)

(3)

and Wang [] established the existence of a successful coupling for a class of stochastic differential equations given by (.). Bishwal [] investigated the uniform rate of weak convergence of the minimum contrast estimator in the Ornstein-Uhlenbeck process (.).

The solution of model (1.2) is given by
$$\varepsilon_t = e^{-\lambda t}\varepsilon_0 + \mu\bigl(1 - e^{-\lambda t}\bigr) + \sigma\int_0^t e^{-\lambda(t-s)}\,dB_s, \tag{1.4}$$
where $\int_0^t e^{-\lambda(t-s)}\,dB_s \sim N\bigl(0, \frac{1-\exp(-2\lambda t)}{2\lambda}\bigr)$.

The process observed in discrete time is more relevant in statistics and economics. Therefore, by (1.4), the Ornstein-Uhlenbeck time series for $t = 1, 2, \ldots, n$ is given by
$$\varepsilon_t = e^{-\lambda d}\varepsilon_{t-1} + \mu\bigl(1 - e^{-\lambda d}\bigr) + \sigma\sqrt{\frac{1 - e^{-2\lambda d}}{2\lambda}}\,\eta_t, \tag{1.5}$$
where the $\eta_t \sim N(0,1)$ are i.i.d. random errors and $d$ is an equidistant time lag, fixed in advance.
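The exact discrete-time recursion (1.5) is easy to simulate directly; below is a minimal sketch (the function and parameter names are ours, not the paper's):

```python
import math
import random

def simulate_ou_series(n, lam, mu, sigma, d, eps0=0.0, seed=1):
    """Simulate eps_1, ..., eps_n from the exact AR(1) form of the OU process:
    eps_t = e^{-lam*d} * eps_{t-1} + mu*(1 - e^{-lam*d}) + s * eta_t,
    with s = sigma * sqrt((1 - e^{-2*lam*d}) / (2*lam)) and eta_t ~ N(0, 1)."""
    rng = random.Random(seed)
    a = math.exp(-lam * d)  # autoregression coefficient
    s = sigma * math.sqrt((1.0 - math.exp(-2.0 * lam * d)) / (2.0 * lam))
    out, eps = [], eps0
    for _ in range(n):
        eps = a * eps + mu * (1.0 - a) + s * rng.gauss(0.0, 1.0)
        out.append(eps)
    return out
```

The simulated path mean-reverts towards $\mu$ and its stationary variance is $\sigma^2/(2\lambda)$, which is a convenient sanity check on any implementation.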

Models (.) and (.) include many special cases such as a linear regression model with constant coefficient autoregressive processes (whenμ= ; see Hu [], Wu [], Maller [], Pere [] and Fuller []), Ornstein-Uhlenbeck time series or processes (whenβ= ; see Rieder [], Iacus [], Bishwal [], Shimizu [] and Zhang and Zhang []), constant coefficient autoregressive processes (whenμ= , β = ; see Chambers [], Hamilton [], Brockwell and Davis [] and Abadir and Lucas [],etc.).

The paper discusses models (1.1) and (1.5). The organization of the paper is as follows. In Section 2 estimators of $\beta$, $\lambda$ and $\sigma^2$ are obtained by the quasi-maximum likelihood method. Under general conditions, the existence and consistency of the quasi-maximum likelihood estimators, as well as their asymptotic normality, are investigated in Section 3. Hypothesis testing is discussed in Section 4. Some preliminary lemmas are presented in Section 5. The main proofs of the theorems are presented in Section 6, with two real examples in Section 7.

2 Estimation method

Without loss of generality, we assume that $\mu = 0$, $\varepsilon_0 = 0$ in the sequel. Write the 'true' model as
$$y_t = x_t^T\beta_0 + e_t,\quad t = 1, 2, \ldots, n \tag{2.1}$$
and
$$e_t = \exp(-\lambda_0 d)e_{t-1} + \sigma_0\sqrt{\frac{1 - \exp(-2\lambda_0 d)}{2\lambda_0}}\,\eta_t, \tag{2.2}$$
where the $\eta_t \sim N(0,1)$ are i.i.d.

By (.), we have

et=σ

 –exp(–λd) λ

t

j=

(4)

Thusetis measurable with respect to theσ-fieldHgenerated byη,η, . . . ,ηt, and

Eet= , Var(et) =σ

 –exp(–λd) λ

exp –λdt(t– ). (.)

Using similar arguments as those of Rieder [] or Maller [], we get the log-likelihood of $y_1, y_2, \ldots, y_n$ conditional on $y_1$:
$$\ell_n\bigl(\beta,\lambda,\sigma^2\bigr) = \log L_n = -\frac{n-1}{2}\log\frac{\pi\sigma^2}{\lambda} - \frac{n-1}{2}\log\bigl(1 - \exp(-2\lambda d)\bigr) - \frac{\lambda}{\sigma^2(1 - \exp(-2\lambda d))}\sum_{t=2}^{n}\bigl(\varepsilon_t - \exp(-\lambda d)\varepsilon_{t-1}\bigr)^2. \tag{2.5}$$

We maximize (.) to obtain QML estimators denoted by σˆn, βˆn,λˆn(when they exist).

Then the first derivatives ofnmay be written as

$$\frac{\partial\ell_n}{\partial\sigma^2} = -\frac{n-1}{2\sigma^2} + \frac{\lambda}{\sigma^4(1 - \exp(-2\lambda d))}\sum_{t=2}^{n}\bigl(\varepsilon_t - \exp(-\lambda d)\varepsilon_{t-1}\bigr)^2, \tag{2.6}$$

$$\frac{\partial\ell_n}{\partial\lambda} = \frac{n-1}{2\lambda} - \frac{(n-1)d\exp(-2\lambda d)}{1 - \exp(-2\lambda d)} - \frac{2\lambda d\exp(-\lambda d)}{\sigma^2(1 - \exp(-2\lambda d))}\sum_{t=2}^{n}\bigl(\varepsilon_t - \exp(-\lambda d)\varepsilon_{t-1}\bigr)\varepsilon_{t-1} - \frac{1 - (1 + 2\lambda d)\exp(-2\lambda d)}{\sigma^2(1 - \exp(-2\lambda d))^2}\sum_{t=2}^{n}\bigl(\varepsilon_t - \exp(-\lambda d)\varepsilon_{t-1}\bigr)^2 \tag{2.7}$$

and

$$\frac{\partial\ell_n}{\partial\beta} = \frac{2\lambda}{\sigma^2(1 - \exp(-2\lambda d))}\sum_{t=2}^{n}\bigl(\varepsilon_t - \exp(-\lambda d)\varepsilon_{t-1}\bigr)\bigl(x_t - \exp(-\lambda d)x_{t-1}\bigr). \tag{2.8}$$

Thus $\hat\sigma^2_n$, $\hat\beta_n$, $\hat\lambda_n$ satisfy the following estimation equations:
$$\hat\sigma^2_n = \frac{2\hat\lambda_n}{(n-1)(1 - \exp(-2\hat\lambda_n d))}\sum_{t=2}^{n}\bigl(\hat\varepsilon_t - \exp(-\hat\lambda_n d)\hat\varepsilon_{t-1}\bigr)^2, \tag{2.9}$$

$$\frac{n-1}{2\hat\lambda_n} - \frac{(n-1)d\exp(-2\hat\lambda_n d)}{1 - \exp(-2\hat\lambda_n d)} - \frac{2\hat\lambda_n d\exp(-\hat\lambda_n d)}{\hat\sigma^2_n(1 - \exp(-2\hat\lambda_n d))}\sum_{t=2}^{n}\bigl(\hat\varepsilon_t - \exp(-\hat\lambda_n d)\hat\varepsilon_{t-1}\bigr)\hat\varepsilon_{t-1} - \frac{1 - (1 + 2\hat\lambda_n d)\exp(-2\hat\lambda_n d)}{\hat\sigma^2_n(1 - \exp(-2\hat\lambda_n d))^2}\sum_{t=2}^{n}\bigl(\hat\varepsilon_t - \exp(-\hat\lambda_n d)\hat\varepsilon_{t-1}\bigr)^2 = 0 \tag{2.10}$$

and

$$\sum_{t=2}^{n}\bigl(\hat\varepsilon_t - \exp(-\hat\lambda_n d)\hat\varepsilon_{t-1}\bigr)\bigl(x_t - \exp(-\hat\lambda_n d)x_{t-1}\bigr) = 0, \tag{2.11}$$

where $\hat\varepsilon_t = y_t - x_t^T\hat\beta_n$.
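In practice this system can be solved by profiling: for each fixed $\lambda$, the $\beta$ equation is ordinary least squares on the quasi-differenced data $y_t - e^{-\lambda d}y_{t-1}$, $x_t - e^{-\lambda d}x_{t-1}$, and the $\sigma^2$ equation is explicit, so one may maximize the conditional log-likelihood over a grid of $\lambda$ values. A sketch under these assumptions (the grid-search strategy and all names are ours, not the paper's):

```python
import math
import numpy as np

def fit_ou_regression(y, X, d, lam_grid):
    """Quasi-ML fit of y_t = x_t'beta + eps_t with OU errors (mu = 0).
    For each candidate lambda: quasi-difference the data, solve the beta
    estimating equation by least squares, plug sigma^2 from its estimating
    equation, and keep the lambda with the largest conditional log-likelihood."""
    n = len(y)
    best = None
    for lam in lam_grid:
        a = math.exp(-lam * d)
        yd = y[1:] - a * y[:-1]
        Xd = X[1:] - a * X[:-1]
        beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
        r = yd - Xd @ beta                         # fitted innovations
        sigma2 = 2.0 * lam * float(r @ r) / ((n - 1) * (1.0 - a * a))
        v = sigma2 * (1.0 - a * a) / (2.0 * lam)   # innovation variance
        ll = -0.5 * (n - 1) * math.log(2.0 * math.pi * v) - float(r @ r) / (2.0 * v)
        if best is None or ll > best[0]:
            best = (ll, lam, beta, sigma2)
    return best[1], best[2], best[3]               # lambda-hat, beta-hat, sigma2-hat
```

On simulated data with known parameters the routine recovers $(\lambda_0, \beta_0, \sigma_0^2)$ up to sampling and grid error, which is a useful check before applying it to real series.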

To obtain our results, the following conditions are sufficient (see Maller []).

(A) Xn= n

t=xtxTt is positive definite for sufficiently largenand

lim

n→∞≤maxtnx T

tXn–xt= . (.)

(A)

lim sup

n→∞ |˜

λ|maxX

  n ZnX

T

n

< , (.)

whereZn= n

t=(xtxTt–+xt–xTt),|˜λ|max(·)denotes the maximum in absolute value of the eigenvalues of a symmetric matrix.

For ease of exposition, we shall introduce the following notation, which will be used later in the paper.

Let $\theta = (\beta, \lambda)$ be an $(m+1)$-vector. Define
$$S_n(\theta) = \sigma^2\frac{\partial\ell_n}{\partial\theta} = \sigma^2\Bigl(\frac{\partial\ell_n}{\partial\beta},\ \frac{\partial\ell_n}{\partial\lambda}\Bigr),\qquad F_n(\theta) = -\sigma^2\frac{\partial^2\ell_n}{\partial\theta\,\partial\theta^T}. \tag{2.14}$$

By (.) and (.), we get the components ofFn(θ)

σ

n

∂β ∂βT =

λ

 –exp(–λd)

n

t=

xt–exp(–λd)xt–

xt–exp(–λd)xt–

T

= λ

 –exp(–λd)Xn(λ), (.)

σ

n

∂β ∂λ= –

exp(–λd)  –exp(–λd)

n

t=

εt–xt+εtxt–– exp(–λd)xt–εt–

– – ( + )exp(–λd) ( –exp(–λd)) ·

n

t=

εt–exp(–λd)εt–

xt–exp(–λd)xt–

(.)

and
$$-\sigma^2\frac{\partial^2\ell_n}{\partial\lambda^2} = \frac{\sigma^2(n-1)}{2\lambda^2} - \frac{2\sigma^2(n-1)d^2\exp(-2\lambda d)}{(1 - \exp(-2\lambda d))^2} + \frac{2d^2\lambda\exp(-2\lambda d)}{1 - \exp(-2\lambda d)}\sum_{t=2}^{n}\varepsilon_{t-1}^2$$
$$\quad + \frac{2d\exp(-\lambda d)\bigl[(2 - \lambda d) - (2 + 3\lambda d)\exp(-2\lambda d)\bigr]}{(1 - \exp(-2\lambda d))^2}\sum_{t=2}^{n}\bigl(\varepsilon_t - \exp(-\lambda d)\varepsilon_{t-1}\bigr)\varepsilon_{t-1}$$
$$\quad + \frac{4d\exp(-2\lambda d)\bigl[\lambda d(1 - \exp(-2\lambda d)) - 1 + (1 + 2\lambda d)\exp(-2\lambda d)\bigr]}{(1 - \exp(-2\lambda d))^3}\sum_{t=2}^{n}\bigl(\varepsilon_t - \exp(-\lambda d)\varepsilon_{t-1}\bigr)^2. \tag{2.17}$$

Hence we have
$$F_n(\theta) = \begin{pmatrix} \dfrac{2\lambda}{1 - \exp(-2\lambda d)}X_n(\lambda) & -\sigma^2\dfrac{\partial^2\ell_n}{\partial\beta\,\partial\lambda} \\[2mm] * & -\sigma^2\dfrac{\partial^2\ell_n}{\partial\lambda^2} \end{pmatrix}, \tag{2.18}$$

where the $*$ indicates that the elements are filled in by symmetry. Taking expectations at $\theta = \theta_0$ in (2.17), and noting that the cross sum has mean zero while $E(e_t - \exp(-\lambda_0 d)e_{t-1})^2 = \sigma_0^2\frac{1-\exp(-2\lambda_0 d)}{2\lambda_0}$, we have
$$E\Bigl(-\sigma^2\frac{\partial^2\ell_n}{\partial\lambda^2}\Big|_{\theta=\theta_0}\Bigr) = \frac{(n-1)\sigma_0^2\bigl[1 - (1 + 2\lambda_0 d)\exp(-2\lambda_0 d)\bigr]^2}{2\lambda_0^2(1 - \exp(-2\lambda_0 d))^2} + \frac{2d^2\lambda_0\exp(-2\lambda_0 d)}{1 - \exp(-2\lambda_0 d)}\sum_{t=2}^{n}E e_{t-1}^2 =: \Delta_n\bigl(\theta_0, \sigma_0^2\bigr) = O(n). \tag{2.19}$$

Thus,
$$D_n = E\bigl(F_n(\theta_0)\bigr) = \begin{pmatrix} \dfrac{2\lambda_0}{1 - \exp(-2\lambda_0 d)}X_n(\lambda_0) & 0 \\[2mm] 0 & \Delta_n(\theta_0, \sigma_0^2) \end{pmatrix}. \tag{2.20}$$

3 Large sample properties of the estimators

Theorem . Suppose that conditions(A)-(A)hold.Then there is a sequence An↓

such that,for each A> ,as n→ ∞,the probability

P there are estimatorsθˆn,σˆnwith Sn(θˆn) = ,and ˆ

θn,σˆn

Nn(A)→. (.)

Furthermore,

ˆ

θn,σˆn

p

θ,σ

, n→ ∞, (.)

where,for each n= , , . . . ,A> and An∈(,σ),define neighborhoods

Nn(A) = θRm+: (θθ)TDn(θθ)≤A

(7)

and

Nn(A) =Nn(A)∩ σ∈

σ–An,σ+An

. (.)

Theorem . Suppose that conditions(A)-(A)hold.Then

 ˆ

σn

F T

n (θˆn)(θˆnθ)→DN(,Im+), n→ ∞. (.) In the following, we will investigate some special cases in models (.) and (.). From Theorem . and Theorem ., we obtain the following results. Here we omit their proofs.

Corollary . Ifβ= ,then

n(θ,σ) ˆ

σn

(λˆnλ)→DN(, ), n→ ∞. (.)

Corollary . Ifβ= ,then

n(λˆnλ)→DN

,σ, n→ ∞. (.)

4 Hypothesis testing

In order to fit a data set $\{y_t, t = 1, 2, \ldots, n\}$, we may use model (1.1) or an Ornstein-Uhlenbeck process with a constant mean level,
$$dy_t = \lambda(\mu - y_t)\,dt + \sigma\,dB_t. \tag{4.1}$$
If $\beta \ne 0$, then we use model (1.1), namely models (2.1) and (2.2). If $\beta = 0$, then we use model (4.1). How do we know whether $\beta \ne 0$ or $\beta = 0$? In this section, we consider this hypothesis testing question and obtain limiting distributions for likelihood ratio (LR) test statistics (see Fan and Jiang []).

Under the null hypothesis
$$H_0:\ \beta = 0,\quad \lambda > 0,\quad \sigma^2 > 0, \tag{4.2}$$
let $\hat\beta_{0n}, \hat\lambda_{0n}, \hat\sigma^2_{0n}$ be the corresponding ML estimators of $\beta, \lambda, \sigma^2$. Also let
$$\hat L_{0n} = -2\ell_n\bigl(\hat\beta_{0n}, \hat\lambda_{0n}, \hat\sigma^2_{0n}\bigr) \tag{4.3}$$
and
$$\hat L_{1n} = -2\ell_n\bigl(\hat\beta_n, \hat\lambda_n, \hat\sigma^2_n\bigr). \tag{4.4}$$

By (.) and (.), we have that

ˆ

Ln= (n– )log

πσˆn

ˆ

λn

+ (n– )log –exp(–λˆnd)

+ λˆn ˆ

σ

n( –exp(–λˆnd)) n

t=

ˆ

εt–exp(–λˆnd)εˆt–

(8)

= (n– )log

πσˆ

n

ˆ

λn

+ (n– )log –exp(–λˆnd)

+ (n– )

= (n– )log(π+ ) + (n– )logσˆn

+ (n– )log –exp(–λˆnd)

–log(λˆn)

. (.)

And similarly,

ˆ

Ln= (n– )log(π+ ) + (n– )log

ˆ

σn

+ (n– )log –exp(–λˆnd)

–log(λˆn)

. (.)

By (.) and (.), we have

˜

d(n) =Lˆ nLˆn

= (n– )log

ˆ

σ n

ˆ

σ

n

+ (n– )

log

 –exp(–λˆnd)

 –exp(–λˆnd)

–log

ˆ

λn

ˆ

λn

= (n– )

ˆ

σn

ˆ

σ

n

– 

+ (n– )

 –exp(–λˆnd)

 –exp(–λˆnd)

λˆn ˆ

λn

+op()

= (n– )σˆ  nσˆn

σ 

+ (n– )

 –exp(–λˆnd)

 –exp(–λˆnd)

λˆn ˆ

λn

+op()

= (n– )σˆ  nσˆn

σ +op(). (.)

Large values ofd˜(n) suggest rejection of the null hypothesis.

Theorem . Suppose that conditions(A)-(A)hold.IfHholds,then

˜

d(n)→(m), n→ ∞. (.)

5 Some lemmas

Throughout this paper, let $C$ denote a generic positive constant which may take different values at each occurrence. To prove our main results, we first introduce the following lemmas.

Lemma . If condition(A)holds,then for anyλR+the matrix X

n(λ)is positive definite

for large enough n,and

lim

n→∞≤maxtnx T

tXn–(λ)xt= .

Proof. Let $\tilde\lambda_1$ and $\tilde\lambda_m$ be the smallest and largest roots of $|Z_n - \tilde\lambda X_n| = 0$. Then from Ex. of Rao [],
$$\tilde\lambda_1 \le \frac{u^T Z_n u}{u^T X_n u} \le \tilde\lambda_m$$
for unit vectors $u$. Thus by (2.13) there are some $\delta \in (0,1)$ and $n_0(\delta)$ such that $n \ge n_0$ implies
$$u^T Z_n u \le 2(1-\delta)u^T X_n u. \tag{5.1}$$
By the definition of $X_n(\lambda)$ and (5.1), we have
$$u^T X_n(\lambda)u = \sum_{t=2}^{n}\bigl(u^T\bigl(x_t - \exp(-\lambda d)x_{t-1}\bigr)\bigr)^2 \ge \sum_{t=2}^{n}\bigl(u^T x_t\bigr)^2 + \exp(-2\lambda d)\sum_{t=2}^{n}\bigl(u^T x_{t-1}\bigr)^2 - \exp(-\lambda d)u^T Z_n u$$
$$\ge \bigl(1 + \exp(-2\lambda d) - 2(1-\delta)\exp(-\lambda d)\bigr)u^T X_n u\bigl(1 + o(1)\bigr) = \bigl(\bigl(1 - \exp(-\lambda d)\bigr)^2 + 2\delta\exp(-\lambda d)\bigr)u^T X_n u\bigl(1 + o(1)\bigr) = C(\lambda,\delta)u^T X_n u\bigl(1 + o(1)\bigr), \tag{5.2}$$
where the edge terms are negligible by (A1). By Rao [, p.] and (A1), we have
$$\frac{(u^T x_t)^2}{u^T X_n u} \to 0 \tag{5.3}$$
uniformly in $t \le n$. From (5.2) and $C(\lambda,\delta) > 0$,
$$x_t^T X_n^{-1}(\lambda)x_t = \sup_u \frac{(u^T x_t)^2}{u^T X_n(\lambda)u} \le \sup_u \frac{(u^T x_t)^2}{C(\lambda,\delta)u^T X_n u}\bigl(1 + o(1)\bigr) \to 0. \tag{5.4}$$
□

Lemma . The matrix Dn is positive definite for large enough n, E(Sn(θ)) =  and

Var(Sn(θ)) =σDn.

Proof Note thatXn(λ) is positive definite andn(θ,σ) > . It is easy to show that the matrixDnis positive definite for large enoughn. By (.), we have

σE

∂n

∂β

θ=θ

= λ  –exp(–λd) ·

n

t=

Eet–exp(–λd)et–

xt–exp(–λd)xt–

= λ  –exp(–λd)σ

 –exp() λ ·

n

t=

xt–exp(–λd)xt–

Eηt

Note that $e_{t-1}$ and $\eta_t$ are independent, so we have $E(\eta_t e_{t-1}) = 0$. Thus, by (2.7), $E\eta_t^2 = 1$ and $E(e_t - \exp(-\lambda_0 d)e_{t-1})^2 = \sigma_0^2\frac{1-\exp(-2\lambda_0 d)}{2\lambda_0}$, we have
$$E\Bigl(\frac{\partial\ell_n}{\partial\lambda}\Big|_{\theta=\theta_0}\Bigr) = \frac{n-1}{2\lambda_0} - \frac{(n-1)d\exp(-2\lambda_0 d)}{1 - \exp(-2\lambda_0 d)} - \frac{1 - (1 + 2\lambda_0 d)\exp(-2\lambda_0 d)}{\sigma_0^2(1 - \exp(-2\lambda_0 d))^2}\,\sigma_0^2\,\frac{1 - \exp(-2\lambda_0 d)}{2\lambda_0}(n-1)$$
$$= \frac{n-1}{2\lambda_0} - \frac{(n-1)d\exp(-2\lambda_0 d)}{1 - \exp(-2\lambda_0 d)} - \frac{1 - (1 + 2\lambda_0 d)\exp(-2\lambda_0 d)}{2\lambda_0(1 - \exp(-2\lambda_0 d))}(n-1) = 0.$$
Hence, from this and the preceding display,
$$E S_n(\theta_0) = \sigma_0^2\Bigl(E\frac{\partial\ell_n}{\partial\beta}\Big|_{\theta=\theta_0},\ E\frac{\partial\ell_n}{\partial\lambda}\Big|_{\theta=\theta_0}\Bigr) = 0. \tag{5.5}$$

By (.) and (.), we have

Var

σ∂n ∂β

θ=θ

=Var

λ  –exp(–λd)

n

t=

et–exp(–λd)et–

xt–exp(–λd)xt–

= σ  λ

 –exp(–λd)Var

n

t=

xt–exp(–λd)xt–

ηt

= σ  λ  –exp(–λd)

Xn(λ). (.)

Note that $\{\eta_t e_{t-1}, \mathcal{H}_t\}$ is a martingale difference sequence with $\operatorname{Var}(\eta_t e_{t-1}) = E\eta_t^2\,E e_{t-1}^2 = E e_{t-1}^2$. Evaluating (2.7) at $\theta = \theta_0$ and using (2.2), the random part of $\sigma^2\partial\ell_n/\partial\lambda$ is a linear combination of $\sum_{t=2}^{n}\eta_t e_{t-1}$ and $\sum_{t=2}^{n}(\eta_t^2 - 1)$, whose cross-covariance vanishes. Hence
$$\operatorname{Var}\Bigl(\sigma^2\frac{\partial\ell_n}{\partial\lambda}\Big|_{\theta=\theta_0}\Bigr) = \frac{4\lambda_0^2 d^2\exp(-2\lambda_0 d)}{(1 - \exp(-2\lambda_0 d))^2}\,\sigma_0^2\,\frac{1 - \exp(-2\lambda_0 d)}{2\lambda_0}\sum_{t=2}^{n}E e_{t-1}^2 + \frac{\bigl[1 - (1 + 2\lambda_0 d)\exp(-2\lambda_0 d)\bigr]^2}{(1 - \exp(-2\lambda_0 d))^4}\,\sigma_0^4\,\frac{(1 - \exp(-2\lambda_0 d))^2}{4\lambda_0^2}\,2(n-1)$$
$$= \frac{2\sigma_0^2\lambda_0 d^2\exp(-2\lambda_0 d)}{1 - \exp(-2\lambda_0 d)}\sum_{t=2}^{n}E e_{t-1}^2 + \frac{(n-1)\sigma_0^4\bigl[1 - (1 + 2\lambda_0 d)\exp(-2\lambda_0 d)\bigr]^2}{2\lambda_0^2(1 - \exp(-2\lambda_0 d))^2} = \sigma_0^2\Delta_n\bigl(\theta_0, \sigma_0^2\bigr). \tag{5.7}$$

By (.), (.), and noting thatet–andηtare independent, we have

Cov

σ∂n ∂β ,σ

 

∂n

∂λ

θ=θ

= –σ – ( +  )exp(–λd)

λ( –exp(–λd))

 

·E

n

t=

ηt

n

t=

ηt

xt–exp(–λd)xt–

= –σ – ( +  )exp(–λd)

λ( –exp(–λd))

 

t

·

n

t=

xt–exp(–λd)xt–

= . (.)

From (.)-(.), it follows thatVar(Sn(θ)) =σDn. The proof is completed. Lemma .(Maller []) Let Wnbe a symmetric random matrix with eigenvaluesλ˜j(n),

≤jd.Then

WnpI ⇔ ˜λj(n)→p, n→ ∞. Lemma . For each A> ,

$$\sup_{\theta\in N_n(A)}\bigl\|D_n^{-1/2}F_n(\theta)D_n^{-T/2} - \Lambda_n\bigr\| \to_p 0,\quad n \to \infty, \tag{5.9}$$
and also
$$\Lambda_n \to_p I_{m+1}, \tag{5.10}$$
$$\lim_{c\to 0}\limsup_{A\to\infty}\limsup_{n\to\infty}P\Bigl(\inf_{\theta\in N_n(A)}\lambda_{\min}\bigl(D_n^{-1/2}F_n(\theta)D_n^{-T/2}\bigr) \le c\Bigr) = 0, \tag{5.11}$$
where
$$\Lambda_n = \begin{pmatrix} \dfrac{\lambda(1 - \exp(-2\lambda_0 d))}{\lambda_0(1 - \exp(-2\lambda d))}I_m & 0 \\[2mm] 0 & \dfrac{-\sigma^2\frac{\partial^2\ell_n}{\partial\lambda^2}\big|_{\theta=\theta_0}}{\Delta_n(\theta_0, \sigma_0^2)} \end{pmatrix}.$$

Proof. Let $X_n(\lambda) = X_n^{1/2}(\lambda)X_n^{T/2}(\lambda)$ be a square root decomposition of $X_n(\lambda)$. Then
$$D_n = \begin{pmatrix} \sqrt{\frac{2\lambda_0}{1-\exp(-2\lambda_0 d)}}X_n^{1/2}(\lambda_0) & 0 \\ 0 & \sqrt{\Delta_n(\theta_0,\sigma_0^2)} \end{pmatrix}\begin{pmatrix} \sqrt{\frac{2\lambda_0}{1-\exp(-2\lambda_0 d)}}X_n^{T/2}(\lambda_0) & 0 \\ 0 & \sqrt{\Delta_n(\theta_0,\sigma_0^2)} \end{pmatrix} = D_n^{1/2}D_n^{T/2}. \tag{5.12}$$
Let $\theta \in N_n(A)$. Then
$$(\theta - \theta_0)^T D_n(\theta - \theta_0) = \frac{2\lambda_0}{1 - \exp(-2\lambda_0 d)}(\beta - \beta_0)^T X_n(\lambda_0)(\beta - \beta_0) + (\lambda - \lambda_0)^2\Delta_n(\theta_0, \sigma_0^2) \le A^2. \tag{5.13}$$
From (2.18), (5.12) and (5.13),
$$D_n^{-1/2}F_n(\theta)D_n^{-T/2} - \Lambda_n = \begin{pmatrix} W_1 & W_2 \\ W_2^T & W_3 \end{pmatrix}, \tag{5.14}$$
where
$$W_1 = \frac{\lambda(1 - \exp(-2\lambda_0 d))}{\lambda_0(1 - \exp(-2\lambda d))}\bigl(X_n^{-1/2}(\lambda_0)X_n(\lambda)X_n^{-T/2}(\lambda_0) - I_m\bigr), \tag{5.15}$$
$$W_2 = \sqrt{\frac{1 - \exp(-2\lambda_0 d)}{2\lambda_0}}\,\frac{X_n^{-1/2}(\lambda_0)\bigl(-\sigma^2\frac{\partial^2\ell_n}{\partial\beta\,\partial\lambda}\bigr)}{\sqrt{\Delta_n(\theta_0, \sigma_0^2)}} \tag{5.16}$$
and
$$W_3 = \frac{-\sigma^2\frac{\partial^2\ell_n}{\partial\lambda^2} + \sigma^2\frac{\partial^2\ell_n}{\partial\lambda^2}\big|_{\theta=\theta_0}}{\Delta_n(\theta_0, \sigma_0^2)}. \tag{5.17}$$

Let
$$N_n^\beta(A) = \Bigl\{\beta: \frac{2\lambda_0}{1 - \exp(-2\lambda_0 d)}\bigl\|X_n^{T/2}(\lambda_0)(\beta - \beta_0)\bigr\|^2 \le A^2\Bigr\} \tag{5.18}$$
and
$$N_n^\lambda(A) = \Bigl\{\theta: |\lambda - \lambda_0| \le \frac{A}{\sqrt{\Delta_n(\theta_0, \sigma_0^2)}}\Bigr\}. \tag{5.19}$$
As the first step, we will show that, for each $A > 0$,
$$\sup_{\theta\in N_n(A)}\|W_1\| \to_p 0. \tag{5.20}$$

In fact, note that
$$W_1 = \frac{\lambda(1 - \exp(-2\lambda_0 d))}{\lambda_0(1 - \exp(-2\lambda d))}X_n^{-1/2}(\lambda_0)\bigl(X_n(\lambda) - X_n(\lambda_0)\bigr)X_n^{-T/2}(\lambda_0) = \frac{\lambda(1 - \exp(-2\lambda_0 d))}{\lambda_0(1 - \exp(-2\lambda d))}X_n^{-1/2}(\lambda_0)(T_1 + T_2 + T_3)X_n^{-T/2}(\lambda_0), \tag{5.21}$$
where
$$T_1 = \bigl(\exp(-\lambda_0 d) - \exp(-\lambda d)\bigr)\sum_{t=2}^{n}x_{t-1}\bigl(x_t - \exp(-\lambda_0 d)x_{t-1}\bigr)^T,\qquad T_2 = T_1^T,$$
$$T_3 = \bigl(\exp(-\lambda_0 d) - \exp(-\lambda d)\bigr)^2\sum_{t=2}^{n}x_{t-1}x_{t-1}^T.$$

Let $u, v \in \mathbb{R}^m$ with $|u| = |v| = 1$, and let $u_n^T = u^T X_n^{-1/2}(\lambda_0)$, $v_n = X_n^{-T/2}(\lambda_0)v$. By the Cauchy-Schwarz inequality, Lemma 5.1 and noting $\theta \in N_n^\lambda(A)$, we have
$$\bigl|u_n^T T_1 v_n\bigr| \le \bigl|\exp(-\lambda_0 d) - \exp(-\lambda d)\bigr|\Bigl(\sum_{t=2}^{n}\bigl(u_n^T x_{t-1}\bigr)^2\Bigr)^{1/2}\Bigl(\sum_{t=2}^{n}\bigl(v_n^T\bigl(x_t - \exp(-\lambda_0 d)x_{t-1}\bigr)\bigr)^2\Bigr)^{1/2}$$
$$\le C d|\lambda - \lambda_0|\sqrt{n}\max_{1\le t\le n}\bigl(x_t^T X_n^{-1}(\lambda_0)x_t\bigr)^{1/2} \le \frac{C\sqrt{n}A}{\sqrt{\Delta_n(\theta_0, \sigma_0^2)}}\,o(1) \to 0. \tag{5.22}$$
Similar to the proof for $T_1$, we easily obtain $|u_n^T T_2 v_n| \to 0$. By the Cauchy-Schwarz inequality, Lemma 5.1 and noting $\theta \in N_n^\lambda(A)$, we have
$$\bigl|u_n^T T_3 v_n\bigr| \le \bigl(\exp(-\lambda_0 d) - \exp(-\lambda d)\bigr)^2\Bigl(\sum_{t=2}^{n}\bigl(u_n^T x_t\bigr)^2\Bigr)^{1/2}\Bigl(\sum_{t=2}^{n}\bigl(v_n^T x_t\bigr)^2\Bigr)^{1/2} \le C n(\lambda - \lambda_0)^2\max_{1\le t\le n}x_t^T X_n^{-1}(\lambda_0)x_t \le \frac{nA^2}{\Delta_n(\theta_0, \sigma_0^2)}\,o(1) \to 0. \tag{5.23}$$
Hence (5.20) follows from (5.21)-(5.23). For the second step, we will show that
$$\sup_{\theta\in N_n(A)}\|W_2\| \to_p 0. \tag{5.24}$$

Note that
$$\varepsilon_t = y_t - x_t^T\beta = x_t^T(\beta_0 - \beta) + e_t \tag{5.25}$$
and
$$\varepsilon_t - \exp(-\lambda_0 d)\varepsilon_{t-1} = \bigl(x_t - \exp(-\lambda_0 d)x_{t-1}\bigr)^T(\beta_0 - \beta) + \sigma_0\sqrt{\frac{1 - \exp(-2\lambda_0 d)}{2\lambda_0}}\,\eta_t. \tag{5.26}$$

Write
$$J_1 = \sqrt{\frac{1 - \exp(-2\lambda_0 d)}{2\lambda_0}}\,X_n^{-1/2}(\lambda_0)\Bigl(-\sigma^2\frac{\partial^2\ell_n}{\partial\beta\,\partial\lambda}\Bigr).$$
Substituting (5.25) and (5.26) into (2.16) splits $-\sigma^2\partial^2\ell_n/\partial\beta\,\partial\lambda$ into finitely many sums whose coefficients stay bounded for $\theta \in N_n(A)$: terms driven by $\beta_0 - \beta$,
$$T_4 = \sum_{t=2}^{n}x_{t-1}^T(\beta_0 - \beta)\bigl(x_t - \exp(-\lambda d)x_{t-1}\bigr),\qquad T_5 = \sum_{t=2}^{n}\bigl(x_t - \exp(-\lambda d)x_{t-1}\bigr)^T(\beta_0 - \beta)\,x_{t-1},$$
terms driven by the errors,
$$T_6 = \sum_{t=2}^{n}e_{t-1}x_{t-1},\qquad T_7 = \sigma_0\sqrt{\frac{1 - \exp(-2\lambda_0 d)}{2\lambda_0}}\sum_{t=2}^{n}\eta_t x_{t-1},\qquad T_8 = \sum_{t=2}^{n}e_{t-1}x_t,$$
and a martingale term
$$T_9 = \sigma_0\sqrt{\frac{1 - \exp(-2\lambda_0 d)}{2\lambda_0}}\sum_{t=2}^{n}\eta_t\bigl(x_t - \exp(-\lambda d)x_{t-1}\bigr).$$

For $\beta \in N_n^\beta(A)$ and each $A > 0$, we have
$$\bigl((\beta_0 - \beta)^T x_t\bigr)^2 = (\beta_0 - \beta)^T X_n^{1/2}(\lambda_0)X_n^{-1/2}(\lambda_0)x_t x_t^T X_n^{-T/2}(\lambda_0)X_n^{T/2}(\lambda_0)(\beta_0 - \beta) \le \max_{1\le t\le n}\bigl(x_t^T X_n^{-1}(\lambda_0)x_t\bigr)\,(\beta_0 - \beta)^T X_n(\lambda_0)(\beta_0 - \beta) \le C A^2\max_{1\le t\le n}x_t^T X_n^{-1}(\lambda_0)x_t. \tag{5.27}$$
By (5.27) and Lemma 5.1, we have
$$\sup_{\beta\in N_n^\beta(A)}\max_{1\le t\le n}\bigl|(\beta_0 - \beta)^T x_t\bigr| \to 0,\quad n \to \infty,\ A > 0. \tag{5.28}$$

Using the Cauchy-Schwarz inequality and (5.28), we obtain
$$\bigl|u_n^T T_4\bigr| \le \Bigl(\sum_{t=2}^{n}\bigl(x_{t-1}^T(\beta_0 - \beta)\bigr)^2\Bigr)^{1/2}\Bigl(\sum_{t=2}^{n}\bigl(u_n^T\bigl(x_t - \exp(-\lambda d)x_{t-1}\bigr)\bigr)^2\Bigr)^{1/2} \le C\sqrt{n}\max_{1\le t\le n}\bigl|(\beta_0 - \beta)^T x_t\bigr| = o(\sqrt{n}). \tag{5.29}$$
Using a similar argument for $T_5$, we obtain $u_n^T T_5 = o_p(\sqrt{n})$.

By (A1), we have
$$\operatorname{Var}\bigl(u_n^T T_7\bigr) = \sigma_0^2\,\frac{1 - \exp(-2\lambda_0 d)}{2\lambda_0}\sum_{t=2}^{n}\bigl(u_n^T x_{t-1}\bigr)^2 = o(n), \tag{5.30}$$
so $u_n^T T_7 = o_p(\sqrt{n})$ by the Chebyshev inequality. By Lemma 5.1 and (2.3), we have
$$\operatorname{Var}\bigl(u_n^T T_8\bigr) = \operatorname{Var}\Bigl(\sum_{t=2}^{n}u_n^T x_t e_{t-1}\Bigr) = \sigma_0^2\,\frac{1 - \exp(-2\lambda_0 d)}{2\lambda_0}\sum_{j=1}^{n-1}\Bigl(\sum_{t=j+1}^{n}u_n^T x_t\exp\bigl(-\lambda_0 d(t - 1 - j)\bigr)\Bigr)^2 \le C n\max_{1\le t\le n}\bigl(u_n^T x_t\bigr)^2 = o(n), \tag{5.31}$$
so $u_n^T T_8 = o_p(\sqrt{n})$ by the Chebyshev inequality. The term $T_6$ is handled exactly as $T_8$, and a similar argument gives $u_n^T T_9 = o_p(\sqrt{n})$.

Thus (.) follows immediately from (.), (.)-(.), (.), (.) and (.). For the third step, we will show that

Wp. (.)

Write
$$J_2 = -\sigma^2\frac{\partial^2\ell_n}{\partial\lambda^2} + \sigma^2\frac{\partial^2\ell_n}{\partial\lambda^2}\Big|_{\theta=\theta_0}.$$
By (2.17), $J_2$ is the sum of the differences, between $(\theta, \sigma^2)$ and $(\theta_0, \sigma_0^2)$, of the five terms
$$\frac{\sigma^2(n-1)}{2\lambda^2},\qquad -\frac{2\sigma^2(n-1)d^2\exp(-2\lambda d)}{(1 - \exp(-2\lambda d))^2},\qquad \frac{2d^2\lambda\exp(-2\lambda d)}{1 - \exp(-2\lambda d)}\sum_{t=2}^{n}\varepsilon_{t-1}^2,$$
$$\frac{2d\exp(-\lambda d)\bigl[(2 - \lambda d) - (2 + 3\lambda d)\exp(-2\lambda d)\bigr]}{(1 - \exp(-2\lambda d))^2}\sum_{t=2}^{n}\bigl(\varepsilon_t - \exp(-\lambda d)\varepsilon_{t-1}\bigr)\varepsilon_{t-1}$$
and
$$\frac{4d\exp(-2\lambda d)\bigl[\lambda d(1 - \exp(-2\lambda d)) - 1 + (1 + 2\lambda d)\exp(-2\lambda d)\bigr]}{(1 - \exp(-2\lambda d))^3}\sum_{t=2}^{n}\bigl(\varepsilon_t - \exp(-\lambda d)\varepsilon_{t-1}\bigr)^2;$$
denote these differences by $T_{10}$, $T_{11}$, $T_{12}$, $T_{13}$ and $T_{14}$, respectively.

By (.) and (.), we obtain that

T=σ(n– )

λ –

σ(n– ) λ

= n–  λλ 

σλλ+λσ–σ

=o(n) (.)

and

T=σ

(n– )dexp(–λd) ( –exp(–λd)) –

σ(n– )dexp(–λd) ( –exp(–λd))

= d

(n– )

( –exp(–λd))( –exp(–λd))σ

exp(–λd) –exp(–λd) +exp(–λd)(σ–σ) +exp(–λdλd)

·σexp(–λd) –exp(–λd)+exp(–λd)(σσ)

·σexp(–λd)

 –exp(–λd)+σexp(–λd) –exp(–λd)

=o(n). (.)

By (.), we have

T=

dλexp(–λd)  –exp(–λd)

n

t=

εt––dλ

exp(–λd)  –exp(–λd)

n

t= et–

=d

λexp(–λd)  –exp(–λd)

n

t=

xTt(β–β)

+ xTt(β–β)et+et

–dλ

exp(–λd)  –exp(–λd)

n

t= et–

=d

λexp(–λd)  –exp(–λd)

n

t=

xTt(β–β)

 +d

λexp(–λd)  –exp(–λd)

n

t=

(18)

+

dλexp(–λd)  –exp(–λd) –

dλexp(–λd)  –exp(–λd)

n

t= et–

=d

λexp(–λd)  –exp(–λd) T+

dλexp(–λd)

 –exp(–λd) T+T. (.) By (.), it is easy to show that

T=o(n). (.)

By Lemma ., (.) and (.), we have

Var(T) =Var

n

t=

xTt(β–β)et

=Var

n–

j=

n

t=j+

xTt(β–β)exp –λd(t–  –j)

ηj

=

n–

j=

n

t=j+ xT

t(β–β)exp –λd(t–  –j)

≤ max

≤tnx T

t(β–β)

n–

j=

n

t=j+

exp –λd(t–  –j)

Cmax

≤tn xT

t(β–β)n=o(n). (.) Thus by the Chebychev inequality and (.),

T=op(

n). (.)

Write

dλexp(–λd)  –exp(–λd) –

dλ

exp(–λd)  –exp(–λd)

= d

( –exp(–λd))( –exp(–λd))U, (.) where

U=λexp(–λd) –exp(–λd)–λexp(–λd)

 –exp(–λd). Note that

U=λexp(–λd)exp(–λd) –exp(–λd)

+λexp(–λd) –exp(–λd)+ (λλ)exp(–λd)

 –exp(–λd)

=o(), (.)

so we have

(19)

Thus, by (.), (.), (.) and (.), we have

T=o(n). (.)

By (.), we have

T=dexp(–λd)[( –) –exp(–λd)] ( –exp(–λd))

n

t=

εt–exp(–λd)εt–

εt–

–dexp(–λd)[( –) –exp(–λd)] ( –exp(–λd))

n

t=

et–exp(–λd)et–

et–

=dexp(–λd)[( –) –exp(–λd)] ( –exp(–λd)) σ

 –exp(–λd) λ

n

t=

xTt–(β–β)ηt

+

dexp(–λd)[( –) –exp(–λd)] ( –exp(–λd)) σ

 –exp(–λd) λ

–dexp(–λd)[( –) –exp(–λd)] ( –exp(–λd))

·σ

 –exp(–λd) λ

n

t=

ηtet–

=T+T. (.)

It is easy to show that

T=o(n). (.)

Note that{ηtet–,Ht}is a martingale difference sequence, so we have

Var

n

t=

ηtet–

=

n

t=

Eet–=n(θ,σ).

Hence,

T=o(n). (.)

By (.)-(.), we have

T=o(n). (.)

It is easily proved, along the same lines, that $T_{14} = o_p(n)$: by (5.26), the sum $\sum_{t=2}^{n}(\varepsilon_t - \exp(-\lambda d)\varepsilon_{t-1})^2$ differs from $\sigma_0^2\frac{1 - \exp(-2\lambda_0 d)}{2\lambda_0}\sum_{t=2}^{n}\eta_t^2 = O_p(n)$ by terms which are $o_p(n)$, and the corresponding coefficient difference is $o(1)$ on $N_n(A)$. Hence $J_2 = T_{10} + T_{11} + T_{12} + T_{13} + T_{14} = o_p(n)$ and, since $\Delta_n(\theta_0, \sigma_0^2)$ grows at the rate $n$, (5.32) follows. This completes the proof of (5.9) from (5.20), (5.24) and (5.32).

For $\theta \in N_n(A)$ we have $\frac{\lambda(1 - \exp(-2\lambda_0 d))}{\lambda_0(1 - \exp(-2\lambda d))} \to 1$ as $n \to \infty$. To prove (5.10), it therefore remains to show that
$$\frac{-\sigma^2\frac{\partial^2\ell_n}{\partial\lambda^2}\big|_{\theta=\theta_0}}{\Delta_n(\theta_0, \sigma_0^2)} \to_p 1,\quad n \to \infty.$$
This follows immediately from (2.19) and the Markov inequality. Finally, we will prove (5.11). By (5.9) and (5.10), we have
$$D_n^{-1/2}F_n(\theta)D_n^{-T/2} \to_p I_{m+1},\quad n \to \infty \tag{5.35}$$
uniformly in $\theta \in N_n(A)$ for each $A > 0$. Thus, by Lemma 5.3,
$$\lambda_{\min}\bigl(D_n^{-1/2}F_n(\theta)D_n^{-T/2}\bigr) \to_p 1,\quad n \to \infty. \tag{5.36}$$
This implies (5.11). □

Lemma . (Hall and Heyde []) Let {Sni,Fni, ≤ ikn,n ≥} be a zero-mean,

square-integrable martingale array with differences Xni, and let ηbe an a.s. finite

random variable. Suppose that iE{X

niI(|Xni|> ε)|Fn,i–} →pfor all ε → , and

iE{Xni|Fn,i–} →.Then

Snkn=

i

XniDZ,

where the r.v.Z has the characteristic function E{exp(–ηt)}.

6 Proof of theorems

Proof of Theorem. TakeA> , let

Mn(A) = θRm+: (θθ)TDn(θθ) =A

(.)

be the boundary ofNn(A), and letθMn(A). Using (.) and the Taylor expansion, for

eachσ> , we have

n

θ,σ=n

θ,σ

+ (θθ)T

∂n(θ,σ)

∂θ +

 (θθ)

T∂n(θ,σ)

∂θ ∂θT (θθ)

= 

σn

θ,σ

+ (θθ)TSn(θ) – 

σ(θθ)

TF

(21)

LetQn(θ) =(θθ)TFn(θ˜)(θθ) andvn(θ) = AD

T

n(θθ). Takec>  andθMn(A),

and by (.), we obtain that

P n

θ,σ≥n

θ,σ

for someθMn(A)

P (θθ)TSn(θ)≥Qn(θ),Qn(θ) >cAfor someθMn(A)

+P Qn(θ)≤cAfor someθMn(A)

P vTn(θ)D

 

n Sn(θ) >cAfor someθMn(A)

+P vTn(θ)D

  n Fn(θ˜)D

T

n vn(θ)≤cfor someθMn(A)

P D

 

n Sn(θ)>cA

+P

inf

θNn(A)λmin

D

  n Fn(θ˜)D

Tn

c

. (.)

By Lemma . and the Chebychev inequality, we obtain

P D

 

n Sn(θ)>cA

≤Var(D –

n Sn(θ)) cA =

σ 

cA. (.)

LetA→ ∞, thenc↓, and using (.), we have

P

inf

ϕNn(A)λmin

D

  n Fn(θ˜)D

T

n

c

→. (.)

By (.)-(.), we have

lim

A→∞lim infn→∞ P n

θ,σ<n

θ,σ

for allθMn(A)

= . (.)

By Lemma ., λmin(Xn(θ))→ ∞asn→ ∞. Hence λmin(Dn)→ ∞. Moreover, from

(.), we have

inf

θNn(A)

λmin

Fn(θ)

p∞.

This implies thatn(θ,σ) is concave onNn(A). Noting this fact and (.), we get

lim

A→∞lim infn→∞ P

sup

θMn(A)

n

θ,σ<n

θ,σ

,n

θ,σis concave onNn(A)

= . (.)

On the event in the brackets, the continuous functionn(θ,σ) has a unique maximum

inθ over the compact neighborhoodNn(A). Hence

lim

A→∞lim infn→∞ P Sn ˆ

θn(A)

=  for a uniqueθˆn(A)∈Nn(A)

= .

Moreover, there is a sequenceAn→ ∞such thatθˆn=θˆ(An) satisfies

lim inf

n→∞ P Sn(θˆn) =  andθˆnmaximizesn

θ,σuniquely inNn(A)

(22)

Thisθˆn= (βˆn,λˆn) is a QML estimator forθ. It is clearly consistent, and

lim

A→∞lim infn→∞ P θˆnNn(A)

= .

Sinceθˆn= (βˆn,λˆn) are ML estimators forθ,σˆnis an ML estimator forσfrom (.). To complete the proof, we will show thatσˆn→σasn→ ∞. IfθˆnNn(A), thenβˆn

Nnβ(A) andλˆnNnλ(A).

By (.) and (.), we have

ˆ

εt–exp(–λˆnd)εˆt–=

xt–exp(–λˆnd)xt–

T

(β–βˆn) +

et–exp(–λˆnd)et–

. (.)

By (.), (.) and (.), we have

(n– )σˆn= λˆn  –exp(–λˆnd)

n

t=

ˆ

εt–exp(–λˆnd)εˆt–

= λˆn  –exp(–λˆnd)

n

t=

ˆ

εt–exp(–λˆnd)εˆt–

· xt–exp(–λˆnd)xt–

T

(β–βˆn) +

et–exp(–λˆnd)et–

= λˆn  –exp(–λˆnd)

n

t=

ˆ

εt–exp(–λˆnd)εˆt–

xt–exp(–λˆnd)xt–

T

(β–βˆn)

+ λˆn  –exp(–λˆnd)

n

t=

ˆ

εt–exp(–λˆnd)εˆt–

et–exp(–λˆnd)et–

= λˆn  –exp(–λˆnd)

n

t=

ˆ

εt–exp(–λˆnd)εˆt–

et–exp(–λˆnd)et–

. (.)

From (.), it follows that

n

t=

xt–exp(–λˆnd)xt–

T

(β–βˆn) 

=

n

t=

ˆ

εt–exp(–λˆnd)εˆt–

– 

n

t=

ˆ

εt–exp(–λˆnd)εˆt–

et–exp(–λˆnd)et–

+

n

t=

et–exp(–λˆnd)et–

. (.)

From (.), we get

n

t=

et–exp(–λˆnd)et–

=

n

t=

exp(–λd)et–+σ

 –exp(–λd) λ

ηt–exp(–λˆnd)et–
