
5.6 Monte-Carlo evidence

5.6.2 Assessing misspecification

In this section, we present Monte Carlo experiments to investigate the ramifications of misspecification in the forecasting problem described in Section 2.4 with independent and identically distributed processes, and to evaluate the ability of the Taylor algorithm to capture these effects. The paramount assumption made in this chapter, that of independence of the explanatory variables, is imposed on the simulations that follow. To carry out this endeavor, we construct a benchmark MSFE by means of Monte Carlo simulations. This benchmark MSFE is then compared to the MSFE approximation obtained with the Taylor algorithm and given by (5.4.11). For the analysis, we consider several DGPs, each of the general form

$$Y_{t+1} = \phi(X_t, \theta) + U_{t+1},$$

where $\{U_\tau\} \sim \mathrm{IIN}(0, \sigma_u)$ is an innovation process, $\{X_\tau\} \sim \mathrm{IIN}(\mu_x, \sigma_x)$, and $\theta$ is a vector of parameters. The DGPs considered differ in the functional form of $\phi$. The functions we consider are as follows:

$$
\begin{aligned}
\phi_1(X_t, \theta) &= \theta_1 X_t + \theta_2 X_t^{\theta_3},\\
\phi_2(X_t, \theta) &= \theta_4 - \theta_3 \log\left[1 + \exp(-\theta_2/\theta_3 - \theta_1 X_t/\theta_3)\right],\\
\phi_3(X_t, \theta) &= \theta_1 X_t + \theta_2 (X_t + \theta_3)^2 + \sin\left(\pi (X_t - 1)/\theta_4\right),\\
\phi_4(X_t, \theta) &= \theta_1 X_t + \theta_2 Z_t.
\end{aligned}
\qquad (5.6.5)
$$
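For concreteness, a minimal Python sketch of these four functional forms is given below. It assumes the parameter vector $\theta$ is passed as a NumPy array indexed theta[0] through theta[3], and that the extra regressor $Z_t$ entering $\phi_4$ is supplied separately; these conventions are illustrative rather than taken from the text.

```python
import numpy as np

# Illustrative implementations of the four conditional-mean functions in (5.6.5).
# theta is assumed to be an array (theta[0], ..., theta[3]); z is the extra regressor Z_t in phi_4.

def phi1(x, theta):
    return theta[0] * x + theta[1] * x ** theta[2]

def phi2(x, theta):
    return theta[3] - theta[2] * np.log(1.0 + np.exp(-theta[1] / theta[2] - theta[0] * x / theta[2]))

def phi3(x, theta):
    return theta[0] * x + theta[1] * (x + theta[2]) ** 2 + np.sin(np.pi * (x - 1.0) / theta[3])

def phi4(x, theta, z):
    return theta[0] * x + theta[1] * z
```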

As described in the previous section, the MSFE cannot be evaluated analytically, so we calculate the benchmark MSFE by means of Monte Carlo simulations. The motivation behind using Monte Carlo simulations to determine a benchmark MSFE lies in the fact that the MSFE is equal to the expected value of the CMSFE. Given a realization of the processes $\{X_\tau\}_{\tau=t-n}^{t-1}$ and $\{Y_\tau\}_{\tau=t-n+1}^{t}$, it is simple to compute the CMSFE conditional on the given sample. Generating many such samples, $M$, by Monte Carlo simulations, we can construct $M$ conditional mean square forecast errors, $\{CMSFE_i\}_{i=1}^{M}$, and approximate the MSFE by the sample mean of the simulations.

We now describe the details involved in the construction of the benchmark MSFE. For a given set of values of the parameters $P = \{\mu_x, \sigma_x, \sigma_u, \theta\}$ and a particular functional form of $\phi$ from those given in (5.6.5), twenty thousand Monte Carlo simulations are conducted ($M = 20000$). We use the index $m$ to denote a particular Monte Carlo simulation. For the $m$th simulation, we generate the sample series $\{x_{\tau,m}\}_{\tau=1}^{T}$ of length $T = 501$ as a realization of the explanatory process $\{X_\tau\}_{\tau=t-n}^{t}$, such that the first element of the series is the first observation, $1 \leftrightarrow t-n$, and the last element of the series is the last observation, $501 \leftrightarrow t$. Each $x$ is a realization of a normally distributed random variable, $X \sim N(\mu_x, \sigma_x)$, and the population series is independent and identically distributed, $\{X_\tau\}_{\tau=t-n}^{t-1} \sim \mathrm{IID}$. From this sample series, we calculate the sample series $\{f_{\tau,m}\}_{\tau=1}^{T}$ by means of the relation $f_{\tau,m} = \phi_i(x_{\tau,m}, \theta)$ for each of the DGPs in (5.6.5).
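As a rough illustration, the generation step for a single simulation $m$ might look like the following sketch, here using $\phi_1$ and purely illustrative parameter values (none of the numbers below come from the text):

```python
import numpy as np

rng = np.random.default_rng(0)           # fixed seed, for reproducibility of the sketch
T = 501                                  # series length: first element <-> t - n, last <-> t
mu_x, sigma_x = 1.0, 0.5                 # illustrative values of mu_x and sigma_x
theta = np.array([0.5, 0.3, 2.0, 1.0])   # illustrative parameter vector theta

x_m = rng.normal(mu_x, sigma_x, size=T)            # realization {x_{tau,m}} of the explanatory process
f_m = theta[0] * x_m + theta[1] * x_m ** theta[2]  # f_{tau,m} = phi_1(x_{tau,m}, theta)
```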

Finally, with the sample series $\{x_{\tau,m}\}_{\tau=1}^{T}$ and $\{f_{\tau,m}\}_{\tau=1}^{T}$, at the forecast origin $\tau = T-1$, we construct the CMSFE as follows:
$$
CMSFE_{m,n} = b^2_{\chi_t,n,m} + v_{\chi_t,n,m}, \qquad
b^2_{\chi_t,n,m} = \left[ f_{t,m} - x_{t,m} \frac{\sum_{\tau=T-n}^{T-1} f_{\tau,m} x_{\tau,m}}{\sum_{\tau=T-n}^{T-1} x_{\tau,m}^2} \right]^2, \qquad
v_{\chi_t,n,m} = \sigma_u^2 + \frac{\sigma_u^2 x_{t,m}^2}{\sum_{\tau=T-n}^{T-1} x_{\tau,m}^2},
$$

where $b^2_{\chi_t,n,m}$ and $v_{\chi_t,n,m}$ are the conditional squared bias and conditional variance of the forecast error, respectively. For each simulation, we obtain $T - 1 = 500$ values of the CMSFE, one for each value of $n$ from $n = 1$ to $n = 500$. The case $n = 1$ refers to OLS estimation carried out with only one observation. For a particular set of parameters $P$, we obtain an array of CMSFEs of size $M \times (T-1)$, $\{CMSFE_{i,j}\}_{i=1,\,j=1}^{M,\,T-1}$.
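To make the bookkeeping concrete, the sketch below computes the CMSFE of one simulated path for every window size $n$, following the displayed formula; the function name and the treatment of the last element as the forecast target are illustrative conventions, not taken from the text.

```python
import numpy as np

def cmsfe_path(x, f, sigma_u):
    """CMSFE of one simulated path for every window size n = 1, ..., T - 1.

    x, f : arrays of length T; the last element plays the role of the forecast
    target t, the preceding T - 1 elements form the estimation sample.
    """
    T = len(x)
    x_t, f_t = x[-1], f[-1]
    out = np.empty(T - 1)
    for n in range(1, T):
        xs = x[T - 1 - n:T - 1]                    # x_{T-n}, ..., x_{T-1}
        fs = f[T - 1 - n:T - 1]                    # f_{T-n}, ..., f_{T-1}
        sxx = np.sum(xs ** 2)
        beta_bar = np.sum(fs * xs) / sxx           # conditional mean of the OLS slope estimator
        b2 = (f_t - x_t * beta_bar) ** 2           # conditional squared bias
        v = sigma_u ** 2 + sigma_u ** 2 * x_t ** 2 / sxx  # conditional variance
        out[n - 1] = b2 + v
    return out
```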

Finally, the benchmark MSFE for a set of parameters $P$ and for an observation window of size $n$ is given by the following:

$$MSFE_n \approx \frac{1}{M} \sum_{i=1}^{M} CMSFE_{i,n}. \qquad (5.6.6)$$
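Putting the pieces together, the benchmark in (5.6.6) can be approximated by a loop of the following shape; this continues the two sketches above (it reuses cmsfe_path and the same illustrative parameter values) and uses fewer replications than the $M = 20000$ of the text purely to keep the example light.

```python
import numpy as np

rng = np.random.default_rng(1)
M, T = 2000, 501                          # reduced M, for illustration only
mu_x, sigma_x, sigma_u = 1.0, 0.5, 1.0    # illustrative parameter set P
theta = np.array([0.5, 0.3, 2.0, 1.0])

cmsfe = np.empty((M, T - 1))
for m in range(M):
    x = rng.normal(mu_x, sigma_x, size=T)
    f = theta[0] * x + theta[1] * x ** theta[2]   # phi_1 as the conditional mean
    cmsfe[m] = cmsfe_path(x, f, sigma_u)          # from the sketch above

msfe_benchmark = cmsfe.mean(axis=0)               # benchmark MSFE_n, n = 1, ..., T - 1
```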

The benchmark Monte Carlo MSFE is compared with the MSFE approximation obtained with the Taylor algorithm and given by (5.4.11). The approximation (5.4.11) is constructed by use of sample moments in place of their population counterparts. For this, we generate the sample series $\{x_\tau\}_{\tau=1}^{N}$ of length $N = 5000$ as a realization of the explanatory process $\{X_\tau\}_{\tau=t-n}^{t}$, such that the first element of the series is the first observation, $1 \leftrightarrow t-n$, and the last element of the series is the last observation, $5000 \leftrightarrow t$. Each $x$ is a realization of a normally distributed random variable, $X \sim N(\mu_x, \sigma_x)$, and the population series is independent and identically distributed, $\{X_\tau\}_{\tau=t-n}^{t-1} \sim \mathrm{IID}$. Similarly, we generate the sample series $\{u_\tau\}_{\tau=1}^{N}$ of length $N = 5000$ as a realization of the innovation process $\{U_\tau\}_{\tau=t-n}^{t}$, such that the first element of the series is the first observation, $1 \leftrightarrow t-n$, and the last element of the series is the last observation, $5000 \leftrightarrow t$. Each $u$ is a realization of a normally distributed random variable, $U \sim N(0, \sigma_u)$, and the population series is independent and identically distributed, $\{U_\tau\}_{\tau=t-n}^{t-1} \sim \mathrm{IID}$. Finally, the sample series $\{y_\tau\}_{\tau=1}^{N}$ is generated by means of the relation $y_\tau = \phi_i(x_\tau, \theta) + u_\tau$ for each DGP in (5.6.5).
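A sketch of this generation step, again for $\phi_1$ and with the same illustrative parameter values as before:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5000
mu_x, sigma_x, sigma_u = 1.0, 0.5, 1.0    # illustrative parameter set P
theta = np.array([0.5, 0.3, 2.0, 1.0])

xs = rng.normal(mu_x, sigma_x, size=N)    # realization {x_tau} of the explanatory process
us = rng.normal(0.0, sigma_u, size=N)     # realization {u_tau} of the innovation process
ys = theta[0] * xs + theta[1] * xs ** theta[2] + us   # y_tau = phi_1(x_tau, theta) + u_tau
```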

The population moments in (5.4.11) are estimated by generating their sample counterparts. For example:
$$
E[Y_t X_{t-1}^3] \approx \frac{1}{N} \sum_{\tau=1}^{N} y_\tau x_\tau^3, \qquad
E[Y_t^2 X_{t-1}^2] \approx \frac{1}{N} \sum_{\tau=1}^{N} y_\tau^2 x_\tau^2.
$$
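Continuing from the arrays xs and ys of the previous sketch, these two moments would be approximated as follows:

```python
# Sample analogues of the population moments appearing in (5.4.11)
m_yx3 = np.mean(ys * xs ** 3)         # approximates E[Y_t X_{t-1}^3]
m_y2x2 = np.mean(ys ** 2 * xs ** 2)   # approximates E[Y_t^2 X_{t-1}^2]
```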

Therefore, for a given set of parameters, $P = \{\mu_x, \sigma_x, \sigma_u, \theta\}$, we can generate the necessary sample moments and ultimately evaluate (5.4.11) for different values of the observation window size $n$. The resulting MSFE can be compared to the benchmark MSFE (5.6.6). In the next section, we discuss results for different sets of values of the parameters involved for the four DGPs given in (5.6.5).
