ZHONG, YANJI (2013) Forecast volatility in value of the EUR/USD. [Dissertation (University of Nottingham only)] (Unpublished)

Access from the University of Nottingham repository: http://eprints.nottingham.ac.uk/26749/1/finalthesis12.pdf

Copyright and reuse:

The Nottingham ePrints service makes this work by researchers of the University of Nottingham available open access under the following conditions.

· Copyright and all moral rights to the version of the paper presented here belong to the individual author(s) and/or other copyright owners.

· To the extent reasonable and practicable the material made available in Nottingham ePrints has been checked for eligibility before being made available.

· Copies of full items can be used for personal research or study, educational, or not-for-profit purposes without prior permission or charge provided that the authors, title and full bibliographic details are credited, a hyperlink and/or URL is given for the original metadata page and the content is not changed in any way.

· Quotations or similar reproductions must be sufficiently acknowledged.

Please see our full end user licence at:

http://eprints.nottingham.ac.uk/end_user_agreement.pdf

A note on versions:

The version presented here may differ from the published version or from the version of record. If you wish to cite this item you are advised to consult the publisher’s version. Please see the repository url above for details on accessing the published version and note that access may require a subscription.


University of Nottingham

Forecast volatility in value of the EUR/USD

YANJI ZHONG


Forecast volatility in value of the EUR/USD

By

YANJI ZHONG

2013

A Dissertation presented in part consideration for the degree of

MSc Finance and Investment


Abstract

Currency volatility is unobservable but plays an important role in the international financial market, especially the volatility of the EUR/USD. In an attempt to select the most accurate model for forecasting this currency's volatility, the EWMA model and the GARCH family models under normal and student-t distributions are applied, and their daily and weekly out-of-sample forecast performance is analyzed. For this, the parameter of the EWMA has been set based on literature evidence, and the GARCH-type models are estimated on daily and weekly in-sample data. With the help of the log-likelihood criterion and two out-of-sample evaluation tests, three conclusions can be drawn. Firstly, a model that fits well in-sample is not guaranteed to fit well out-of-sample. Secondly, relatively high-frequency data forecasts better than low-frequency data. Thirdly, there is no clear conclusion about the existence of a volatility leverage effect or volatility feedback in the daily and weekly currency series. However, there is slight volatility persistence in this currency, although only weak evidence has been found.


Acknowledgement

I would like to express my appreciation to my supervisor, Dr. Thanaset Chevapatrakul. I could not have finished my thesis without his support and suggestions.

I would like to thank my parents for their encouragement. I also want to thank all of my friends, especially Christain Sattler, who provided much help with R programming.

Table of Contents

Abstract
Acknowledgement
Table of Contents
1. Introduction
  1.1. Overview of the EUR/USD value market
  1.2. Importance of Volatility Forecasting
2. Literature Review
  2.1. The EUR/USD Exchange Rate Regimes
  2.2. Stylized Facts about Currency Market Volatility
    2.2.1. Volatility Clustering
    2.2.2. Leptokurtosis
    2.2.3. Volatility Spillovers
    2.2.4. Leverage Effect
    2.2.5. Volatility Persistence
    2.2.6. Volatility Feedback
  2.3. Methods Applied to Forecast Volatility
    2.3.1. Standard Deviation Method
    2.3.2. Exponentially Weighted Moving Average Method
    2.3.3. GARCH Family Volatility Models
    2.3.4. Stochastic Volatility Method
    2.3.5. Implied Volatility Method
  2.4. Methods Applied to Evaluate Forecast Performance
    2.4.1. Representative of True Volatility
    2.4.2. Goodness of Fit
    2.4.3. Symmetric and Asymmetric Loss Functions
3. Data
  3.1. The Frequency of Data
  3.2. The Length of Data Period
  3.3. Data Selection
  3.4. Descriptive Statistics
    3.4.1. Normality Tests
    3.4.2. Serial Correlation Tests
    3.4.3. Unit Root Tests
    3.4.4. The Feature of Volatility Clustering
4. Methodology
  4.1. Volatility Prediction in light of EWMA
  4.2. Volatility Forecast based on GARCH-type models
    4.2.1. Mean Equation
    4.2.2. Testing for ARCH Effect
    4.2.3. Estimation of GARCH Model Parameters
    4.2.4. Diagnostic Test
    4.3.1. Actual Volatility Estimation
    4.3.2. Goodness of Fit
    4.3.3. Symmetric and Asymmetric Loss Function
5. Findings and Analysis
  5.1. Decision on the Orders of Mean Equation
  5.2. ARCH effects tests
  5.3. Analysis of GARCH estimations
  5.4. Forecast Evaluation
    5.4.1. The Goodness of Fit
    5.4.2. Loss Functions
6. Conclusion
  6.1. Summary of Findings
  6.2. Limitations and Recommendations
References
  Data source: <http://www.oanda.com/currency/historical-rates>
  IMF: <http://www.imf.org/external/index.htm>
Appendix

List of Tables

Table 1: Descriptive statistics for daily and weekly currency log-return (Rt) (in-sample data from 1st August 2007 to 31st July 2010)
Table 2: Augmented Dickey-Fuller unit root test, value of the test statistics
Table 3: Akaike Information Criteria (AIC) for different orders in ARMA
Table 4: ARCH effects
Table 5: Daily EUR/USD log-return under normal distribution with ARMA(0, 2)
Table 6: Daily EUR/USD log-return under student-t distribution with ARMA(0, 2)
Table 7: Weekly EUR/USD log-return under normal distribution with ARMA(0, 1)
Table 8: Weekly EUR/USD log-return under student-t distribution with ARMA(0, 1)
Table 9: Maximum Likelihood Criteria
Table 10: Maximum R²adj from regressions

List of Figures

Figure 1: Q-Q Plot of Daily and Weekly Log-Returns
Figure 2: ACFs of Daily and Weekly Log-Returns
Figure 3: Daily and Weekly Log-Returns of the EUR/USD
Figure 4: ACFs based on the second order of log-returns for both daily and weekly EUR/USD series


1. Introduction

1.1. Overview of the EUR/USD value market

Financial and economic turbulence caused fluctuations in asset prices and unanticipated movements in the EUR/USD exchange rate during 2008 to 2013. Starting with Greece, then Ireland, Portugal, Spain and Italy, these euro zone economies borrowed and spent more money than they could afford, which led to the European debt crisis (Lane, 2012). As government debt was downgraded, fears among investors about the development of this crisis grew. For example, Standard & Poor's slashed Greece's sovereign debt rating to BB+, or junk status (Ewing and Healy, 2010). Having lost confidence in European sovereign bonds, investors withdrew their funds from the euro zone and invested in safer countries. A large amount of euro was sold, which triggered a depreciation of the euro. EU leaders introduced many measures to restore confidence in the euro, such as the establishment of the European Fiscal Compact and a €750bn rescue package (Pidd, 2011). Although various policies were implemented in the euro zone, the doom and gloom surrounding the euro continued into 2013, with the Cyprus crisis, the uncertain Italian political situation, and bad loans exceeding 20% of GDP in Slovenia (Beecroft, 2013). These developments threatened the stability of the euro.

For the United States, expansionary monetary and fiscal policies have been undertaken since 2009 to increase the money supply and stimulate the economy. In the first half of 2010, GDP growth in the US (3.1%) was much higher than in the EMU countries (1.7%, IMF). The public reacted positively to these US developments in the second half of the year and was willing to invest in the US; the increasing demand for USD thus strengthened its value. Nonetheless, the Federal Reserve further relaxed monetary policy by reducing the target for the benchmark interest rate (0-0.25%). The average annual USD Libor rate was just 0.82246% in this period. In August 2010, the Fed purchased large-scale longer-term securities and $175 billion of agency debt, lowering term premiums with the aim of putting downward pressure directly on longer-term interest rates. The reduction in interest rates lowered borrowing costs, providing sufficient liquidity to satisfy the demand for dollars. Investors flocked to borrow USD, converted it to other currencies, and invested it in other countries. This means a large amount of USD was sold in the international currency market, which resulted in the weakening of the US dollar. It implies that volatility in this currency market deserves greater consideration.

1.2. Importance of Volatility Forecasting

Many methods, such as relative purchasing power parity, interest rate parity and the sticky-price monetary approach, can be used to forecast currency values. Although the quality of these predictions is not high, they can to some extent indicate the direction of currency movement, i.e. appreciation or depreciation (Eiteman et al, 2013). However, not only the exchange rate itself but also its volatility is important. Volatility is usually interpreted as the level of risk involved in holding a particular currency. A wise financial decision normally relies on the tradeoff between risk and return (Bodie et al. 2011). Since investors are risk-averse, volatility forecasting plays an important role in foreign exchange exposure management.

An increase in foreign exchange volatility might result in changes in a firm's profitability, net cash flow and market value. A measure of these changes is known as foreign exchange exposure (Eiteman et al, 2013). The financial manager is responsible for measuring foreign exchange exposure with the aim of maximizing the firm's profits. There are three types of foreign exchange exposure: transaction exposure, translation exposure, and operating exposure (ibid).

Transaction exposure measures the changes in cash flows arising from unsettled exchange rates on existing contractual obligations. For example, a rise in the EUR/USD volatility amplifies the variability of international transactions for American investors whose consumption is denominated in euro. The cost of hedging foreign exchange risk also increases. It is worth noting that hedging raises a firm's value only if the gain is large enough to offset the cost of hedging (ibid). Thus, a good understanding of the future volatility of the underlying foreign exchange rate is important for the international transaction decision-making process.

Operating exposure measures the changes in expected future cash flows arising from an unanticipated change in exchange rates (ibid). Currency volatility will influence future sales volumes, prices and costs. Take BMW as an example: the high USD/DEM volatility reflected the dollar's depreciation of 50% against the DEM during 1985-1987 (Goddard, 2013). Managers could maintain the DEM revenue by raising the USD price; an alternative policy was to maintain the market sales volume by retaining the USD price. Whichever method they chose, the importance of volatility prediction in the underlying currency should be recognized: it is greatly helpful for strategically managing operating exposure (Eiteman et al, 2013).

Translation exposure measures the impact of exchange rates on consolidated financial statements, resulting from a multinational company's need to restate the financial reports of overseas subsidiaries in the parent's reporting currency (ibid). Under the current rate method, the reported values of inventory, net plant and equipment, and monetary assets are affected by currency risk. If a manager expects currency volatility to increase, he could minimize translation exposure by reducing net exposed assets. If he anticipates a decrease in volatility and a favorable currency movement, he can benefit from a gain. Hence, an accurate volatility prediction is crucial for appropriate risk management.

Motivated by the uncertainty of the EUR/USD market and the importance of forecasting volatility, this study proposes the EWMA model and the GARCH framework for forecasting time-varying volatility using relatively low frequency data, namely daily and weekly data, and then compares their out-of-sample performance in order to select the appropriate forecasting model for the EUR/USD exchange rate. The rest of this thesis is structured as follows. The associated literature is reviewed in the second part. A description of the data used for the analysis is given in the third. The methodology of estimation and forecast evaluation based on the EWMA and GARCH frameworks is presented in the fourth part. An analysis of the empirical results follows in the fifth part. The conclusion is drawn in the final part.

2. Literature Review

2.1. The EUR/USD Exchange Rate Regimes

The Bretton Woods system of fixed exchange rates was abandoned more than 25 years ago. Floating exchange rates, determined by the relationship between demand and supply, emerged at the same time. However, exchange rates can still be affected by government intervention. For example, the central banks reduced the cost of dollar currency swaps by 50 basis points at the end of November 2011, aiming to provide sufficient liquidity to satisfy the demand for dollars (NZZ). This managed to ease strains in dollar markets and weaken the value of the USD. The value of the euro is likewise influenced by the European Central Bank (ECB). Hence, the EUR/USD exchange rate regime should be classified as an intermediate regime, rather than a freely floating regime (Abdalla, 2012).

2.2. Stylized Facts about Currency Market Volatility

Currency volatility reacts to new market information. This information derives mainly from the macroeconomics of the two currencies' economies and from their governments' policies. The latter may include wars, government bankruptcy, or announcements from central banks, such as an increase in the money stock or a deficit in the balance of payments. When shocks take place, some important characteristics of volatility can be detected in currency markets, including volatility clustering, leptokurtic distribution of returns, volatility spillovers among other foreign exchange markets, volatility leverage effects, volatility persistence and volatility feedback.

2.2.1. Volatility Clustering

Volatility clustering refers to large (small) exchange rate changes being followed by further large (small) changes (Baillie and Bollerslev, 1991). It means that high volatility at period t tends to remain high at t+1, and low volatility at t is likely to stay low in the next period. Since volatility changes result from the arrival of new information about future events, volatility clustering is due to a series of reactions to this new information, called information accumulation (Mandelbrot, 1963). For instance, the Greek government's huge deficit triggered a volatile EUR/USD value because a large amount of euro was sold (Ewing and Healy, 2010). The responses led to high volatility in the next few periods.

Strong evidence has been found for volatility clustering in exchange rate log-returns. Based on an ARMA(2,1)-GARCH(1,2) model, Ravindran et al. (2009) demonstrated the volatility clustering phenomenon for the Malaysian ringgit against the USD. Miron and Tudor (2010) also observed this feature in the daily USD/CNY value, relying on an AR(1)-GARCH(1,1) model. Vlaar and Palm (1993) not only found this feature in the weekly DEM/GBP value from 04/1979 to 03/1991 on the basis of an MA(1)-GARCH(1,1) model, but also stated that volatility is higher in economic recessions even when shocks arrive at the same speed.

2.2.2. Leptokurtosis

Leptokurtosis of currency returns refers to fat tails due to the high occurrence of extreme values. There are two explanations for this volatility feature. According to Hassapis and Pittis (1998), one is that the arrival of information is not uniform in an inefficient market, so information clusters on particular days and generates extreme values. The other is that rational investors react to abnormal information about future currency changes, such as government bankruptcy or policy reform, which fattens the tails of the distribution of currency log-returns.

The existence of conditional leptokurtosis in exchange rates has been demonstrated by Pesaran and Robinson (1993) using the student-t ARCH model. In addition, Hassapis and Pittis (1998) reported that the Danish krone, the French franc, the Dutch guilder, and the Swiss franc against the US dollar exhibit a high degree of leptokurtosis in the statistical distribution of their returns, using a student-t autoregressive model with dynamic heteroskedasticity.

Furthermore, Wang et al. (2001) found that currency returns display not only leptokurtosis but also statistical skewness, based on the GARCH-EGB2 model. There is no fixed conclusion about the properties of the conditional distribution of currency returns; it depends on the specific time series.

2.2.3. Volatility Spillovers

Volatility spillovers indicate that unexpected movements in currency volatility in one market transmit to current and future volatility in other currency markets (Bubak et al, 2011), since international financial market co-movements tend to be stronger today. A rise in one foreign exchange market's volatility might further destabilize the volatility of other currencies.

For example, Melvin and Melvin (2003) uncovered the presence of volatility spillover effects across the German and Japanese currency markets. Moreover, the European debt crisis seriously affected the EUR/USD volatility, and also deeply influenced the volatility of the GBP/USD and the SWX/USD, not excluding European emerging market currencies such as the Czech and Hungarian currencies, in terms of the correlation function (Bubak et al, 2011). Further, Kearney and Patton (2000) showed, by employing a series of multivariate GARCH models, that co-movements in weekly currency series are weaker than in daily data, because weekly data tends to be more stable than daily data. [...] of volatility transmission effects between four foreign exchange rates (GBP, JPY, DEM, and CHF against USD), even though hourly data had been taken into account. In this thesis, only one currency is investigated, without considering other multi-variables; thus the volatility spillover effects cannot be examined.

2.2.4. Leverage Effect

Volatility leverage effects refer to asymmetric responses to good news and bad news in financial markets (Alexander, 2008). It can be inferred that positive and negative shocks have different effects on currency volatility. For example, negative shocks arising from a strengthening USD and a weakening euro might be more destabilizing than positive shocks resulting from an appreciation of the euro and a depreciation of the USD. Conversely, it is possible that positive shocks create more turbulence in the currency market than negative ones.

Aksay et al. (1997), utilizing an exponential GARCH-in-mean (EGARCH-M) model, concluded that following negative shocks from the strengthening of the USD, the volatility of TL/USD became higher during 01/1987 to 03/1996. Abdalla (2012) demonstrated the existence of a leverage effect for 19 currencies using EGARCH(1,1); he also found that negative shocks cause higher volatility in the next period than positive shocks do, due to depreciation. Furthermore, McKenzie (2002) attributed the differing impact of negative and positive shocks on the volatility of the USD/AUD to the intervention activity of the Reserve Bank of Australia. Using a TGARCH model, he found that large sales of foreign reserves (negative shocks) by the central bank might create more volatility than purchases of foreign reserves (positive shocks).

Nevertheless, Baharumshah (2007), by means of an EGARCH(1,1) model, indicated that not all Asian currencies exhibited asymmetric effects in their conditional variance before the Asian financial crisis.

2.2.5. Volatility Persistence

Volatility persistence refers to time series whose volatility shocks decay at a slow rate as the lag increases, called a long memory process (Tsay, 2005). It can be interpreted as saying that one shock keeps affecting currency volatility over the long term. This means that the exchange rate volatility has a long memory of the event.

[...] market and the YEN/USD market, because of the outperformance of IGARCH in fitting the data. However, Beine and Laurent (1999) concluded that there is no significant evidence of volatility persistence in four major daily exchange rates (DEM, FRF, YEN, GBP against the USD) during 1980 to 1996, employing a student-t ARIMA model with IGARCH.

Vilasuso (2002) found that currency volatility reflects not only long-term shock effects but also permanent shock influence (i.e. infinite memory of a shock), as FIGARCH exhibited better performance than IGARCH. Pong et al. (2004), comparing FIGARCH and IGARCH for the exchange rates of the dollar against the pound, the mark and the yen, argued instead that a shock influences currency volatility over a long but finite period. They indicated that the effect of a shock on currency volatility will disappear after a sufficiently long time.

2.2.6. Volatility Feedback

Volatility feedback refers to the relationship between volatility and returns. A positive relationship means that an increase in returns leads to an increase in volatility, and this increased volatility in turn leads to a rise in returns. For instance, a rise in EUR/USD log-returns resulting from a strengthening euro and a weakening USD might generate instability among the many traders holding USD. A large amount of USD then tends to be sold, given risk-averse traders and floating exchange rate behavior, which results in a further decrease in the value of the USD (i.e. a further increase in EUR/USD log-returns).

According to the previous literature, this effect is widely prevalent in stock markets. It usually shows up as a positive relationship, because high risk demands a higher return as compensation, known as the risk premium (Bodie et al. 2011). In fact, the volatility feedback effect is also present in currency markets. Tai (2001) found significant evidence of time-varying volatility feedback for the JPY, HKD, SGD and MYR (all against the USD) by means of a multivariate GARCH-in-mean model. In addition, Chen et al. (2010) documented a relationship between currency risk and returns in the Australian, New Zealand, Norwegian, Brazilian, and Icelandic commodity currency markets, based on statistically significant evidence of a negative volatility feedback effect.


2.3. Methods Applied to Forecast Volatility

The empirical literature on forecasting currency volatility has focused on capturing its characteristics. Five methods for modeling and forecasting are the standard deviation, the exponentially weighted moving average, the generalized autoregressive conditional heteroskedasticity family, stochastic volatility, and implied volatility. This part provides a general overview of each method, noting its specific advantages and disadvantages.

2.3.1. Standard Deviation Method

For convenience, volatility is often calculated as the classic standard deviation of the log returns of the underlying asset in conventional markets. More evidence of its application can be found in Cushman (1988), Darby et al (1999), and Görg and Wakelin (2002). This assumes that volatility is constant over the period. However, Covrig and Low (2003) argued that the predictive ability of the standard deviation is very poor over short forecast horizons. This method suffers from three main problems.

First of all, this method is based on the assumption that the individual period returns are independent and identically distributed (i.i.d.). However, many return models do not accommodate this assumption, such as stochastic option pricing models and autoregression models (Alexander, 2008). It is unrealistic to assume that the true volatility is constant up to sample error; it should change over time, instead of being a constant.

Secondly, in this method distant events in the sample period carry the same weight as recent ones, even after the event has passed and the market has returned to normal conditions (Hull, 2012). The estimated volatility remains artificially high in subsequent periods as long as the infrequent event stays in the sample (ibid).

Thirdly, not all data sets are normally distributed. Positive skewness might lead to overestimated volatility: since extreme positive values deviate from the mean, the standard deviation increases (Bodie et al., 2011). Conversely, negative skewness causes volatility to be underestimated. As for excess kurtosis, the standard deviation might underestimate the frequency of extreme values. Hence, volatility cannot simply be defined as the standard deviation of returns.


2.3.2. Exponentially Weighted Moving Average Method

The EWMA method refines the shortcomings of the standard deviation method by giving greater weight to more recent observations and less weight to distant ones (Hull, 2012). In this way, it not only increases the impact of new information on the forecast volatility, but also effectively reduces ghost effects. Moreover, the current volatility estimate in the EWMA depends only on the last estimated volatility and the return calculated from the most recent observations, so it needs only a small number of observations, which is an attractive feature (ibid).

For instance, Bystrom (2002) showed that EWMA produces more accurate volatility forecasts than the historical standard deviation and a 20-day moving average model, analyzing five currencies from 02/01/1990 to 30/04/1999. Based on an analysis of different kinds of loss functions, Gonzalez-Rivera et al. (2004) showed that a simple EWMA, which does not require parameter estimation, behaves as well as other more sophisticated models. However, Covrig and Low (2003) argued that EWMA behaves worse than implied volatility in predicting three currencies (USD/JPY, AUD/USD and GBP/USD) during 05/06/1996 to 25/04/2000.

Although EWMA is easy to calculate, since it requires no parameter estimation, it has three obvious drawbacks. Firstly, its only parameter is inflexible and reacts to market changes slowly (Alexander, 2008). Secondly, the EWMA volatility prediction shows a constant term structure as the forecast horizon increases, so there is no point in forecasting further ahead. Thirdly, EWMA lacks a long-term variance, so its predictions fail to revert to the unconditional variance.

2.3.3. GARCH Family Volatility Models

The ARCH model suggested by Engle (1982) and the GARCH model suggested by Bollerslev (1986) were designed for modeling heteroskedasticity in financial time series. They are capable of capturing the properties of volatility clustering and leptokurtosis. Afterwards, an immense family of GARCH models (e.g., AGARCH, EGARCH, TGARCH, IGARCH, FIGARCH) was introduced to capture different features of volatility.

Bollerslev (1986), Taylor (1987) and Hull (2012) pointed out that GARCH(1,1) is the most popular and widely employed GARCH model in the previous literature. Besides, according to Engle (2001), a higher-order model with additional lag terms can model long-horizon data well, such as several decades of daily data or a year of hourly data. Kumar (2006) showed that GARCH(4,1) and GARCH(5,1) outperform GARCH(1,1) in estimating Indian currency volatility.

Compared with the ARCH(q) model, Engle and Kraft (1983) showed that a simple GARCH model forecasts quarterly data better than an ARCH(8) model, because the ARCH model fails to capture long lags in the shocks with few parameters. Balaban et al. (2004) likewise found that ARCH-type models display the worst predictive power, as substantiated by an analysis of daily data. In addition, the restrictions imposed on the parameters of the ARCH variance equation to ensure the series has a finite fourth moment are quite severe (Brooks, 2008). Nevertheless, Brailsford and Faff (1996) found that ARCH-type models have superior volatility prediction in the Australian stock market. Hansen and Lunde (2001) estimated 330 different GARCH-family models using daily exchange rate data (DEM/USD) and concluded that ARCH(1) significantly outperforms in predictive ability, whereas there is no strong evidence for the forecasting power of GARCH models. So even though the GARCH model is a generalization of the ARCH model, its forecasting performance needs further analysis.

Compared with the EWMA model, Walsh and Tsou (1998) pointed out that GARCH models have better forecasting performance, as shown by analyzing hourly, daily and weekly data. Ederington (2004) also found that EWMA volatility forecasts are less accurate than GARCH(1,1), assessing daily EUR/USD and daily YEN/USD values from 01/04/1971 to 12/31/2003. McMillan and Speight (2004) arrived at the same conclusion in an extensive analysis of 17 daily exchange rate series. Further, West and Cho (1994), examining five weekly currencies during 1973-1989, stated that although the better model for longer-horizon prediction is hard to choose, the GARCH model tends to slightly outperform EWMA in longer-horizon forecasts such as weekly predictions. Since there is no clear conclusion about which method has higher forecasting ability for the recent EUR/USD value, the performance of the EWMA and GARCH models is worth evaluating over the same forecast horizon.

Although GARCH models have been widely used for predicting volatility, they suffer from several drawbacks (Brooks, 2008). Firstly, the non-negativity restrictions imposed on the parameters to assure positivity of the variance may be inaccurate. Secondly, the basic model is unable to capture the leverage effect, volatility persistence, volatility feedback, and volatility spillovers (Tsay, 2005 and Hill et al, 2008). These limitations of GARCH models led to the birth of the extension models, such as TGARCH, IGARCH, and GARCH-in-mean, which have been discussed in part 2.2.

2.3.4. Stochastic Volatility Method

Based on the GARCH variance equation, the stochastic volatility method adds an error term to the conditional variance equation which follows a Gaussian white noise process (Brooks, 2008). In addition to the shock driven by the given available information, this model therefore contains a second shock term.

Melino and Turnbull (1990) found that the stochastic volatility model not only fits the Canada-US exchange rate better, but also enhances forecasting power. Heynen and Kat (1994), comparing a random walk, GARCH(1,1) and EGARCH(1,1) with the stochastic volatility model across different kinds of assets, found that the best volatility forecasting model depends strongly on the asset examined. They showed that the stochastic volatility model forecasts stock indices better than currency volatility.

2.3.5. Implied Volatility Method

The implied volatility method, introduced by Latane and Rendleman (1976) for forecasting option volatility, is based on option prices observed in the market instead of the historical prices of the currency. It also incorporates other useful information, such as past events, investor behavior, and expected events, which may not be considered by other models (Thanh, 2008). It can be obtained from a given option-pricing model such as the Black-Scholes formula. According to Hull (2012), it is useful for gauging the market's expectation of volatility, whereas volatility based on historical data is considered backward looking.

Because it includes rich information for prediction, Pong et al. (2004) found that the implied volatility method outperforms GARCH, ARMA and ARFIMA in forecasting pound and yen volatility over longer horizons (one or three months), based on the regression criterion (the adjusted R²). Even though it contains relevant information about future volatility, Fleming (1998) showed that implied volatility tends to overestimate volatility.


2.4. Methods Applied to Evaluate Forecast Performance

Fitting a volatility specification to in-sample data serves to predict risk in the future. However, good in-sample performance cannot ensure good out-of-sample predictive power (Hansen and Lunde, 2001). Thus a good forecasting model must not only capture the properties of the past data but also provide the most accurate forecasts.

2.4.1. Representative of True Volatility

The measure of true volatility deserves initial attention, since volatility, unlike return, is not observable. In particular, different data intervals call for different measures.

With respect to daily data, Ederington and Guan (2004), Bystrom (2002) and Vilasuso (2002) replaced the true squared volatility with the square of the daily exchange rate return, since the return is the only observable. Additionally, Davidian and Carroll (1987) used the absolute value of daily returns to represent true volatility. Since the squared return is the more widely used measure of true volatility, the absolute return measure will not be used in this thesis.

Nonetheless, these measures as substitutes for the actual volatility are debatable. Only if the log-return is an i.i.d. process with zero mean can the squared return represent true volatility consistently (Alexander, 2008). In fact, given the autocorrelation present in most financial series, this assumption is hard to satisfy. Andersen and Bollerslev (1998a) indicated that measuring the unobserved variance by out-of-sample squared returns is very noisy. Hansen and Lunde (2001) supported this view and stated that poor out-of-sample performance led to incorrect model selection. McMillan and Speight (2004) also indicated that poor forecasting ability might result not from the GARCH model per se but from a failure to measure the true volatility. All of them agreed that ex-post squared returns should be replaced with realized volatility. Specifically, realized volatility can be computed as the cumulative squared returns from intra-day, very high frequency data, such as five-minute returns over a day (Andersen and Bollerslev, 1998a). Since such high frequency data is difficult to access, rather than calculating realized volatility, the out-of-sample squared returns will still be taken as the measure of the unobservable variance in this thesis, even though this biased estimator might weaken the measured forecasting ability of the models.
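To make the two proxies concrete, here is a minimal numpy sketch under stated assumptions: the five-minute return array is hypothetical, standing in for the intraday data the thesis could not access.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intraday data: 288 five-minute log-returns (in %) for one day.
five_min_returns = rng.normal(0.0, 0.03, size=288)

# Realized volatility proxy: cumulative squared intraday returns
# (Andersen and Bollerslev, 1998a).
realized_variance = np.sum(five_min_returns ** 2)

# Squared daily return proxy: square of the aggregated daily log-return,
# the noisier substitute actually used in the thesis.
daily_return = np.sum(five_min_returns)
squared_return_proxy = daily_return ** 2

print(realized_variance, squared_return_proxy)
```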


Considering weekly data, Day and Lewis (1992) introduced two ways to measure the true variance: simply squared weekly returns, or squared daily returns multiplied by the number of trading days per week. However, the former strays far from the zero mean assumption, and the latter results in a rough measure. Besides, Pagan and Schwert (1990) and Franses (1996) used the square of de-meaned weekly returns to measure the true variance.

2.4.2. Goodness of Fit

An analysis of the goodness of fit is a form of out-of-sample test. The regression (R²) test provides a direct indication of the goodness of fit across different out-of-sample regressions (Alexander, 2008). Wang et al. (2001) proposed such a regression and adopted it to evaluate goodness of fit, analyzing the forecasting ability of GARCH specifications under the estimated EGB2 distribution for six currencies. Pong et al. (2004) used the R² statistic to compare forecasts from the GARCH(1,1) model and implied volatility for three currencies during the out-of-sample period 01/1994 to 12/1998. More applications of this method can be found in Pagan and Schwert (1990) and Day and Lewis (1992).
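A minimal sketch of this evaluation regression, assuming statsmodels is available and that `squared_returns` and `variance_forecasts` are hypothetical aligned out-of-sample arrays: the realized proxy is regressed on the forecasts and the R² is read off.

```python
import statsmodels.api as sm  # assumed available

def forecast_r2(squared_returns, variance_forecasts):
    """Regress the realized-volatility proxy on the model forecasts
    and return the R-squared of the evaluation regression."""
    X = sm.add_constant(variance_forecasts)  # intercept plus slope
    model = sm.OLS(squared_returns, X).fit()
    return model.rsquared
```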

With respect to the encompassing tests introduced by Fair and Shiller (1990), which include more than two explanatory forecasts, these are often used to analyze whether the forecast information from one model differs from the information derived from another model. They have been widely used to compare the out-of-sample forecast performance between the GARCH model and the implied volatility method (Day and Lewis, 1992, Covrig and Low, 2003, Pong, 2004). Without currency option data, this approach will not be applied in this thesis.

2.4.3. Symmetric and Asymmetric Loss Functions

In addition to goodness-of-fit tests, a loss function or utility function is an alternative criterion for selecting an accurate forecasting specification over the out-of-sample window (Bollerslev et al., 1994, Diebold and Lopez, 1996, Lopez, 2001). There are many different types of loss functions, based on different interpretations. Not surprisingly, different loss functions can lead to different conclusions.

Lee (1991) applied the root mean square error (RMSE) and the mean absolute error (MAE) to analyze the out-of-sample performance of the ARMA-GARCH process for 5 currencies (350 out-of-sample observations per currency). Vilasuso (2002) compared the forecasting power of the FIGARCH model for six currencies with that of either the GARCH or IGARCH model, using mean square error (MSE) and mean absolute error (MAE) criteria on out-of-sample daily exchange rates over 11 years. He also used the statistical test introduced by Diebold and Mariano (1995) to judge the difference in forecast accuracy between these models. Pong et al. (2004) adopted the MSE criterion to investigate three currencies over four forecast horizons at two frequencies, ranking the 24 cases. Franses (1996) also ranked results for 25 cases based on the MSE test. Hansen and Lunde (2001) applied seven different kinds of loss functions¹ to judge the forecasting ability of 330 GARCH models, using 260 days of out-of-sample DEM/USD data. Interestingly, they found that the outcomes from the MSE criterion are similar to those from the regressions used to test the goodness of fit, so this criterion will not be discussed in the rest of the thesis.

However, one obvious limitation of the above loss functions (e.g., MAE and RMSE) is their symmetry: they put equal weight on over- and under-estimation. In reality, currency investors are unlikely to treat over- and under-estimations of volatility as risks of similar magnitude. For example, a currency put option price usually moves in the opposite direction to the exchange rate. Suppose an American trader will receive an amount of euro in 3 months; if he worries about a weakening euro against a strengthening USD, he might buy currency put options in order to sell euro at a high price in 3 months. An over-estimation of the EUR/USD volatility will be of greater concern to him than to the put option seller, since he has to pay the fixed premium: only if the gain is high enough to offset the premium does he earn a profit. In contrast, under-estimation matters more to the put option seller. Currency volatility gives clear guidance for its derivatives. Therefore, it is advisable to also consider an asymmetric loss function, the mean mixed error (MME) (Brailsford and Faff, 1996, Christoffersen and Diebold 1996).
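A minimal numpy sketch of the symmetric criteria, plus a simplified asymmetric loss for illustration; the weighting scheme below is an assumption for demonstration, not the exact MME formula of Brailsford and Faff (1996).

```python
import numpy as np

def rmse(actual_var, forecast_var):
    # Root mean square error between the volatility proxy and the forecasts.
    return np.sqrt(np.mean((actual_var - forecast_var) ** 2))

def mae(actual_var, forecast_var):
    # Mean absolute error: equal weight on over- and under-estimation.
    return np.mean(np.abs(actual_var - forecast_var))

def asymmetric_loss(actual_var, forecast_var, under_weight=2.0):
    # Simplified asymmetric criterion (not the exact MME): periods where the
    # model under-predicts variance are weighted more heavily.
    errors = actual_var - forecast_var
    weights = np.where(errors > 0, under_weight, 1.0)  # errors > 0: under-prediction
    return np.mean(weights * np.abs(errors))
```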

Nevertheless, Covrig and Low (2003) argued that the classic assumptions behind loss functions are ignored by asymmetric loss functions, and suggested another method, the HLN test². This test, introduced by Harvey et al. (1997), relaxes some restrictions on the forecast errors, such as Gaussian distribution, zero mean and serial non-correlation.

1. The seven kinds of loss functions are MSE1, MSE2, PSE, QLIKE, R2LOG, MAD2 and MAD1.

2. The HLN statistical test is distributed as student-t with (T-1) degrees of freedom. The null hypothesis of this test is no difference between the variance forecasts and the squared true volatility.

3. Data

Data generation is the initial step for modeling and forecasting. Two decisions need to be made to guarantee the adequacy of the results (Alexander, 2008): the frequency of observations, and the total length of the historical data period together with the lengths of the subsamples (i.e. in-sample and out-of-sample).

3.1. The Frequency of Data

High frequency data is usually used to estimate short-term volatility models, while low frequency data is used to model relatively long-term volatility. Andersen et al. (2003) indicated that models specified for daily data generally fail to capture the movements of longer-period data. For example, daily data is more suitable for daily volatility estimation than for 10-day volatility estimation, because applying the square-root-of-time rule might give a rough result.

In addition, the purely statistical view that the data frequency should be as high as possible, to fully reflect the given information, is debatable. The availability of data at ever shorter return horizons has shifted attention from modeling at weekly and daily frequencies to minute-by-minute frequencies. Pong et al. (2004) showed that predictive quality improves when very high-frequency exchange rates are used instead of low frequency data. Although high-frequency data has been shown to respond quickly to new shocks and short-term dynamics, its statistical properties can be influenced by a noise component (Oomen, 2006). The market microstructure noise arising from the transaction process (e.g. bid-ask spreads, asynchronous trading and market closure) can lead to high peaks, thick tails, and skewness in the distribution of currency returns (McGroarty et al, 2005). This violates the normal distribution assumption, which makes estimating a forecasting model more difficult.

Furthermore, it is interesting that, according to the literature, the forecasting quality of GARCH models is likely to be affected by the data frequency. Wang et al. (2001) implied that the interval chosen may directly affect the forecasting ability of a specification. Consequently, daily and weekly data will be used to assess forecasting performance under conditional normal and student-t residual distributions.


3.2. The Length of Data Period

Long data periods cannot ensure accurate model estimation, because of their lower relevance to current information (Figlewski, 2004). The whole sample, covering the period from 1st August 2007 to 31st July 2013 (6 years of data), is the object investigated in this thesis. It is worth noting that the Global Financial Crisis and the European debt crisis broke out in succession during this period, which might affect the statistical features of the conditional error distribution. This makes it a useful period for proposing an appropriate forecasting model for the EUR/USD value in a complicated currency market.

Additionally, the sizes of the in-sample and out-of-sample windows were decided based on previous literature. Most researchers used 5 or 6 years of observations as in-sample data in order to estimate currency volatility models; ample evidence can be found in Franses (1996), Hansen (2001), Pong et al (2004), McMillan and Speight (2004), and Bubak et al (2011). However, Vilasuso (2002) used 19 years of data as the in-sample. When it comes to the out-of-sample size, the decision is more contentious. Some sub-sample sizes cover between 1 and 2 years, as in Bubak (2011), Vilasuso (2002), Hansen (2001), and McMillan and Speight (2004). Alexander (2008) is consistent with this idea and suggests that weekly data over a five-year period might be suitable for estimating weekly risk over one year; this might reduce the estimated standard errors. However, some out-of-sample lengths are roughly equal to the corresponding in-sample lengths, as in Lee (1991), West (1995), Franses (1996), and Pong et al. (2004), who treat both sub-samples as equally important. Surprisingly, some ex-post samples are much larger than the in-sample, as in Bystrom (2002), who focused on evaluating out-of-sample volatility forecast performance for four currencies using 500 daily observations as in-sample and 1800 daily observations as out-of-sample. In sum, I care not only about fitting performance but also about predictive ability. Hence, the first 3 years of data will be the in-sample, and the latest 3 years of observations will be the out-of-sample.


3.3. Data Selection

I use daily midpoint US dollar (USD) exchange rate data for the euro (EUR) over the period from 1st August 2007 to 31st July 2010 (1096 observations) to generate the in-sample data. The out-of-sample period runs from 1st August 2010 to 31st July 2013, also with 1096 observations. For the weekly in-sample window, average weekly midpoint exchange rates from 1st August 2007 to 31st July 2010 were collected as well; the number of observations is much reduced at the weekly interval, in this case to 155. The weekly out-of-sample consists of the latest 156 observations, from 1st August 2010 to 31st July 2013.

All data were obtained from Oanda. To achieve stationarity, I transform the mid-quoted EUR/USD data by taking the first difference of the logarithm for these exchange rate series (Wang et al, 2001). The expression is:

Rt=[ln(Pt)-ln(Pt-1)]*100,

where Pt is the nominal EUR/USD rate at time t, Pt-1 is the exchange rate at t-1, and Rt is the percentage change in the logarithmic exchange rate in period t. Specifically, Rt > 0 implies EUR appreciation, while Rt < 0 indicates the USD becoming stronger. Additionally, for the daily series, Rt represents the currency change between two successive calendar days rather than two successive trading days, because data for weekends and holidays was not dropped.
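A minimal pandas sketch of this transformation and the sample split; the file name `eurusd_daily.csv` and its `date`/`price` columns are hypothetical stand-ins for the Oanda download.

```python
import numpy as np
import pandas as pd

# Hypothetical input file with columns 'date' and 'price' (EUR/USD midpoints).
rates = pd.read_csv("eurusd_daily.csv", parse_dates=["date"], index_col="date")

# Rt = [ln(Pt) - ln(Pt-1)] * 100; the first observation is lost to differencing.
rates["log_return"] = np.log(rates["price"]).diff() * 100
returns = rates["log_return"].dropna()

# Split into the in-sample and out-of-sample windows used in the thesis.
in_sample = returns.loc["2007-08-01":"2010-07-31"]
out_of_sample = returns.loc["2010-08-01":"2013-07-31"]
```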


3.4. Descriptive Statistics

Table 1: Descriptive statistics for daily and weekly currency log-return (Rt)
(In-sample data from 1st August 2007 to 31st July 2010)

                          Daily                  Weekly
Number of Observations³   1095                   154
Mean                      -0.004518              -0.04172
Standard Deviation        0.532074               1.386841
Skewness                  0.1753268              0.2264138
Kurtosis                  7.742222               5.414603
Jarque-Bera test          1037.808 [<2.2e-16]    40.9856 [1.259e-09]
Q test(10)                97.3471 [2.22e-16]     32.1226 [0.000382]
Q test(30)                133.0328 [6.217e-15]   61.2736 [0.0006437]
Q test(50)                170.1986 [5.107e-15]   99.8786 [3.569e-05]

Note: As usual, the 5% level is taken as the benchmark for statistical significance in this thesis. P-values are reported in brackets.

3.4.1. Normality Tests

Three methods are applied to test normality in Table 1. First, we check whether the skewness and kurtosis differ from zero and three, respectively. As the skewness is non-zero and the kurtosis is not three, we get a rough indication of non-normality for both the daily and weekly series. In addition, the kurtosis of the daily series (7.742222) is higher than that of the weekly data (5.414603), which signifies that the high-frequency series has a thicker tail than the relatively low frequency series. Secondly, a quantile-quantile (QQ) plot is used to examine normality⁴. According to Figure 1 below, in the EUR/USD daily log-return QQ-plot the middle points lie closely on the straight line, but the curve turns downward at the left end and upward at the right end. This means that the daily series has a fatter tail than the normal distribution. The situation is similar for the EUR/USD weekly log-return QQ-plot. More specifically, judging by the extent of the curvature at the ends, the daily series displays a much fatter tail than the weekly series. Moreover, regarding the Jarque-Bera statistic⁵, the null hypothesis of a normal distribution is rejected, as the associated p-values are much less than 0.05. From these three tests it can be concluded that both the daily and weekly series are non-normal. It is also shown that the daily series (i.e. higher-frequency data) displays more leptokurtosis than the weekly series, in accordance with Alexander (2008).

3. There are 1096 observations of daily nominal exchange rates and 155 observations of weekly average exchange rates in the in-samples. One observation is lost in each series after taking the first difference of the logarithm, so 1095 daily log-returns and 154 weekly log-returns appear in each in-sample window.

4. If the sample follows a normal distribution, the points should lie along the straight QQ line.

5. The joint null hypothesis is that the skewness is zero and the excess kurtosis is also zero.

Figure 1: Q-Q Plot of Daily and Weekly Log-Returns
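These checks can be reproduced with a minimal scipy sketch, assuming the package is available and that `returns` is one of the log-return series built earlier:

```python
from scipy import stats

def normality_summary(returns):
    # Sample skewness and kurtosis (normal distribution: 0 and 3).
    skew = stats.skew(returns)
    kurt = stats.kurtosis(returns, fisher=False)  # raw (Pearson) kurtosis
    # Jarque-Bera: joint test of zero skewness and zero excess kurtosis.
    jb_stat, jb_pvalue = stats.jarque_bera(returns)
    return skew, kurt, jb_stat, jb_pvalue
```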

3.4.2. Serial Correlation Tests

A white noise (serially uncorrelated) process is a weakly stationary process whose mean is zero and whose autocovariances are all zero except the variance. It is impossible to forecast a white noise series, because future returns are in no way related to historical returns. The Portmanteau test⁶ is used to check whether the autocorrelations of the return series are jointly zero. According to Table 1, the p-values for the daily series are all less than the 5% significance level, suggesting that the null hypothesis of jointly zero autocorrelations has to be rejected: the log-returns of the daily exchange rate are not a white noise process. This can also be interpreted to mean that it is possible to use historical exchange rates to gain information about future values. A similar conclusion can be drawn for the weekly series, because all of its p-values are also lower than 5%. In addition, according to Figure 2 below, in the ACFs of both the daily and weekly log-return series some specific lags appear significantly correlated (i.e. beyond the 95% confidence bounds) while the rest fluctuate within the bounds. This implies these lags are significant for future forecasting. The results from the ACFs and from the Portmanteau test are consistent.

6. The Portmanteau Q(10)/Q(30)/Q(50) test denotes the Ljung-Box test for serial correlation at the 10th, 30th and 50th order, respectively. The null hypothesis is that all autocorrelations up to the given order are jointly zero (i.e. a white noise series). More specifically, Q(m) = T(T+2)·Σ(l=1..m) ρ̂l²/(T-l) is asymptotically a chi-squared random variable with m degrees of freedom.

Figure 2: ACFs of Daily and Weekly Log-Returns
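A minimal sketch of the Portmanteau (Ljung-Box) test with statsmodels, assuming the package is available and `returns` is a log-return series from above; the exact return type of `acorr_ljungbox` varies across statsmodels versions.

```python
from statsmodels.stats.diagnostic import acorr_ljungbox

# Ljung-Box Q statistics and p-values at orders 10, 30 and 50; small
# p-values reject the null of jointly zero autocorrelations (white noise).
lb = acorr_ljungbox(returns, lags=[10, 30, 50])
print(lb)  # lb_stat and lb_pvalue columns in recent versions
```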

3.4.3. Unit Root Tests

Stationarity plays an important role in time series analysis (Tsay, 2005). Many financial data series are non-stationary, which leads to rough results and incorrect analysis: using non-stationary variables in a regression can easily produce a spurious regression, in which variables appear to have a strong and significant relationship even though they are unrelated (Granger and Newbold, 1974). A stationarity condition is imposed across the whole parameter estimation (Francq and Zakoian, 2004). Hence, it is crucial to carry out a unit root test, such as the Augmented Dickey-Fuller (ADF) test, on each variable to check for stationarity before estimating a regression.

There are three versions of the ADF test: without drift, with drift, and with drift and trend⁷. According to Füss (2008), a regression with drift and trend should be estimated first. The null hypothesis γ = β = 0 is tested using the F-type statistic phi3. If this null hypothesis cannot be rejected, the model with drift only is estimated next, and the null hypothesis becomes α = β = γ = 0, tested with the F-type statistic phi2. If this null hypothesis also fails to be rejected, the series should be modeled by the specification without drift. Finally, the tau statistic is checked to test stationarity in light of the appropriate specification. Only if the null hypothesis of non-stationarity (γ = 0) is rejected can it be concluded that the series is stationary.

Table 2: Augmented Dickey-Fuller Unit Root Test

Values of the test statistics:

              tau3       phi2       phi3
Daily data    -21.8552   159.2162   238.8244
Weekly data   -7.0367    16.5142    24.7643

Critical values for the test statistics:

Daily data    1pct    5pct    10pct      Weekly data   1pct    5pct    10pct
tau3          -3.96   -3.41   -3.12      tau3          -3.99   -3.43   -3.13
phi2          6.09    4.68    4.03       phi2          6.22    4.75    4.07
phi3          8.27    6.25    5.34       phi3          8.43    6.49    5.47

The results of the ADF test are shown in Table 2. For the daily log-returns, the phi3 statistic (238.8244) is much greater than its critical values at every significance level (8.27, 6.25, 5.34), so the null hypothesis γ = β = 0 can be rejected significantly; a model with a drift and a trend term should be applied. The tau statistic (-21.8552) is larger in absolute terms than the absolute critical values at every significance level (-3.96, -3.41, -3.12). It follows that the null hypothesis of non-stationarity can be rejected, and it can be concluded that the daily series is stationary. Similarly, the weekly series also proves to be stationary.

7. No drift: ΔRt = γ·Rt-1 + Σ δi·ΔRt-i + vt; with drift: ΔRt = α + γ·Rt-1 + Σ δi·ΔRt-i + vt; with drift and trend: ΔRt = α + β·t + γ·Rt-1 + Σ δi·ΔRt-i + vt.

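For reference, a minimal sketch of an ADF test in Python; note that statsmodels' `adfuller` reports a single regression's tau-type statistic rather than the tau3/phi2/phi3 decomposition in Table 2, so this is an approximation of the procedure, not a reproduction.

```python
from statsmodels.tsa.stattools import adfuller

# ADF test with constant and trend ('ct'), lag order chosen by AIC.
adf_stat, pvalue, usedlag, nobs, critical_values, icbest = adfuller(
    in_sample, regression="ct", autolag="AIC"
)
# A test statistic below the critical values (p-value < 0.05) rejects
# the unit-root null, i.e. the return series is stationary.
print(adf_stat, pvalue, critical_values)
```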

3.4.4. The Feature of Volatility Clustering

According to Figure 3 below, the variances of the daily and weekly log-returns are clearly not constant over time; both series exhibit the feature of volatility clustering.

Figure 3: Daily and Weekly Log-Returns of the EUR/USD (The first one is daily series; the second one is weekly series)

Given these descriptive statistics, it can be concluded that both the daily and weekly log-returns are stationary but non-normal and serially correlated. Moreover, the variances of both series are not constant over time, suggesting that the standard deviation method cannot measure volatility over a short horizon.

In addition, although the skewness of both series is positive, the values are very close to zero, so this is insignificant. More attention should be paid to the kurtosis. Hull (2012) attributed the failure of the conditional mean and variance to fully exhibit leptokurtosis to the traditional Gaussian assumption, which is not flexible enough to explain this statistical feature. He agreed with most of the literature that adopting a student-t conditional distribution can improve the fitted performance of the models. However, Wang et al. (2001) argued that student-t GARCH models tend to under- or over-estimate the third and fourth moments. Hence, given the observed characteristics of thick tails and volatility clustering, the EWMA and GARCH frameworks will be applied to model the daily and weekly series under both the normal and the student-t distribution.

4. Methodology

4.1. Volatility Prediction in light of EWMA

In the EWMA method, more weight is given to recent data, while the weight on past observations declines exponentially as time passes. Under the EWMA approach, volatility tends to evolve gradually over time, rather than staying constant (Hull, 2012). It is written as:

σt² = λ·σt-1² + (1-λ)·Rt-1²,

where σt² is the estimated variance for period t, σt-1² is the estimated variance for the last period t-1, Rt-1 is the previous period's return, and λ is a decay factor with a value between 0 and 1.

The parameter λ is usually 0.94 for daily data and 0.97 for monthly data (J.P.Morgan, 1996). Considering weekly data, I set λ = 0.97, with support from Harris (2011).

The EWMA indicates that the volatility estimate is updated each period based on the most recent return. Specifically, under the influence of a shock from the actual return in the last period, for low λ volatility jumps immediately and falls back down soon, while for high λ volatility rises slightly and declines at a slower rate in later periods.

When it comes to predicting with EWMA, it is easy to produce the one-step-ahead forecast as follows:

σt+1² = λ·σt² + (1-λ)·Rt².

An underlying assumption of this approach is that the mean return is zero (Hull, 2012). Then the expectation of the squared return can represent the actual variance, E(Rt²) = σt². Further, the k-step-ahead forecast is:

σt+k² = σt+1².

This implies that the best forecast of tomorrow's, or any future period's, volatility is the current estimated volatility, consistent with the efficient market hypothesis. However, the actual variance might differ from this forecast in reality, and any recent dynamics in the actual returns are ignored in multi-step-ahead forecasts: the volatility forecasts just stay at the current level and show a flat line. To avoid this situation, rather than using multi-step-ahead forecasts, I will focus on the one-step-ahead forecast, taking full advantage of updating the returns and re-estimating the volatility many times.

4.2. Volatility Forecast based on GARCH-type models

A constant decay factor in the EWMA fails to capture the dynamic changes in the EUR/USD series. A better framework for explaining time-varying volatility is the ARCH/GARCH class of models (Engle, 1982). These models can fully express the important characteristics of currency data, volatility clustering and leptokurtosis (Hull, 2012); in other words, they allow error terms with heteroskedasticity and non-normality. Specifically, heteroskedastic (i.e. conditional) variance means that the variance of the error terms depends on squared previous innovations (Engle, 1982). Thus the volatility at every point in time takes into account the information available at the start of the period, which is more general than the EWMA. Furthermore, error terms with excess kurtosis can follow a conditionally normal or a conditional student-t distribution; in practice, the conditional error distribution is usually expressed as the conditional distribution of the standardized residuals.

According to Andersen and Bollerslev (1998a), GARCH models are a combination of two equations, the mean equation and the variance equation. The pure GARCH(p, q) model is written as:

$R_t = \mu + \varepsilon_t, \quad \varepsilon_t = \sigma_t z_t,$

$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2.$

In order to keep the models stationary with a positive variance, the following conditions are imposed: $\omega > 0$, $0 < \alpha_i < 1$, $0 < \beta_j < 1$ and $\sum_i \alpha_i + \sum_j \beta_j < 1$. More specifically, the intercept $\omega$ models the property of mean reversion: if current volatility is high (low), it tends to fall (rise) over time and finally converges to the long-term variance

$V = \frac{\omega}{1 - \sum_i \alpha_i - \sum_j \beta_j}.$

A high $\alpha$ implies that volatility takes a short time to change, while a low $\alpha$ implies that volatility is slowly responsive to new information (Hull, 2012). According to the arguments mentioned in the literature review, the influence of the orders of the GARCH model on forecasting ability is debatable, so the performances of the GARCH(1,1), GARCH(2,1) and GARCH(3,1) models will be examined.

Two extensions of the GARCH model will be set up to test for leverage effects in volatility: the threshold GARCH (T-GARCH) and the exponential GARCH (E-GARCH) models. The T-GARCH model uses a leverage term to capture asymmetric effects (Glosten, Jagannathan and Runkle, 1993). It is written as:

$R_t = \mu + \varepsilon_t,$

$\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \gamma \varepsilon_{t-1}^2 I_{t-1} + \beta \sigma_{t-1}^2,$

$I_{t-1} = 1 \text{ if } \varepsilon_{t-1} > 0, \text{ and } 0 \text{ otherwise},$

where $\omega > 0$, $\alpha > 0$, $\beta \geq 0$, and $\alpha + \gamma \geq 0$ to ensure positive variance. When $\gamma$ is significant and positive (negative), good (bad) news might give rise to higher volatility in the EUR/USD series. The E-GARCH model is another model that reflects leverage effects; at the same time, it relaxes the positivity constraints on the coefficients by using a log-linear form. It is written as:

$R_t = \mu + \varepsilon_t,$

$\ln(\sigma_t^2) = \omega + \beta \ln(\sigma_{t-1}^2) + \alpha \left( \left| \frac{\varepsilon_{t-1}}{\sigma_{t-1}} \right| - E\left| \frac{\varepsilon_{t-1}}{\sigma_{t-1}} \right| \right) + \gamma \frac{\varepsilon_{t-1}}{\sigma_{t-1}}.$

This model replaces the squared residuals with standardized residuals. When $\gamma > 0$, positive shocks create greater volatility, while when $\gamma < 0$, negative shocks have larger effects on volatility.

Furthermore, it is a widespread view that there is a relationship between risk and returns in the currency market. The GARCH-in-mean model is based on the standard GARCH model: it adds the conditional variance to the mean equation in order to investigate the volatility feedback effect. If the coefficient $\delta$ is significantly positive, there is a positive relationship between currency risk and returns. It is written as:

$R_t = \mu + \delta \sigma_t^2 + \varepsilon_t,$

with $\sigma_t^2$ following the standard GARCH recursion.


Finally, regarding the predictive power of GARCH models, and taking GARCH(1,1) as an example, the one-step-ahead forecast of the conditional variance is given by:

$\hat{\sigma}_{t+1}^2 = (1 - \alpha - \beta) V + \alpha \varepsilon_t^2 + \beta \sigma_t^2.$

It can be interpreted as follows: the best variance forecast for t+1 is a weighted average of the unconditional variance, the estimated variance at t, and the new information in this period, which is captured by the latest squared residual (Engle, 2001). In addition, the k-step-ahead forecast is:

$\hat{\sigma}_{t+k}^2 = V + (\alpha + \beta)^{k-1} (\hat{\sigma}_{t+1}^2 - V).$

As k becomes large enough, the forecast variance reverts to the long-term variance V. Hence, long-term forecasting with GARCH models is questionable (Alexander, 2008).

Instead of taking multi-step forecasts, I will concentrate on one-step-ahead prediction by making full use of the updated returns. Each of tomorrow's volatility forecasts can be obtained by estimating rolling GARCH models. The underlying idea of this procedure is to take a fixed number of returns to estimate tomorrow's conditional variance ($\hat{\sigma}_{t+1}^2$), then delete the first return, add tomorrow's return to the fixed sample, and re-estimate the GARCH model to generate the day after tomorrow's conditional variance ($\hat{\sigma}_{t+2}^2$) (Alexander, 2008). This procedure is repeated until the end of the out-of-sample window is reached, so that the forecasting ability of a specific model can be evaluated. Taking the daily EUR/USD value as an example, the size of the fixed estimation window is 1096 periods⁸. A conditional volatility forecast for 1st August 2010 can be obtained from the specification estimated on the in-sample data (1095 observations); then, by dropping the first daily return on 2nd August 2007, putting the new daily return on 1st August 2010 into the fixed sample, re-estimating the GARCH model, generating a conditional volatility forecast for 2nd August 2010, and repeating this process 1095 times, 1096 conditional daily volatilities are obtained.

In order to use the GARCH framework to forecast currency volatility, it is necessary to estimate these models. This involves four steps.

8 The specific sizes of the in-sample and out-of-sample windows differ slightly. There are 1096 daily observations and 156 weekly observations used to generate the out-of-sample forecasts; still, 1096 daily log-returns and 156 weekly log-returns appear in each out-of-sample window, since the initial log-return can be calculated from the exchange rates on 31/07/2010 and 01/08/2010.


4.2.1. Mean Equation

Initially, a return model (i.e. a mean equation) is constructed in order to remove any serial correlation in the EUR/USD data (Hull, 2012). If there were a unit root in the series, they would be non-stationary and the ARIMA method would have to be carried out. According to Table 2, all the log-returns are stationary (I = 0), meaning it is unnecessary to integrate, so the ARIMA model is not of concern. In addition, if the series had ACFs that were relatively small but decayed very slowly, they would show a long-memory effect (Tsay, 2005) and the ARFIMA would have to be worked out. According to Figure 2, the ACFs do not decay at a very slow speed (i.e. there is no infinite memory effect), so the ARFIMA will not be considered either. In sum, the focus will be on applying an ARMA model to estimate the currency log-returns.

In the autoregressive (AR) model, the current variable depends linearly on its past values and the current innovation. In the moving average (MA) model, the current dependent variable is determined by the current and past values of the white-noise innovations. The ARMA process is a combination of the AR and MA models. Its flexibility often allows a low-order ARMA model to be used instead of high-order individual AR or MA models to describe the dynamic structure of the data. Consistent with Brooks (2008), all the characteristics of the data should be modelled with as few parameters as possible, which decreases the estimated coefficient standard errors and improves the accuracy of the models.

An ARMA process is expressed as ARMA(p, q), where p and q are the orders of the AR and MA processes, respectively (Wooldridge, 2009). For example, the ARMA(1, 1) process can be written as:

$R_t = \mu + \phi R_{t-1} + \theta \varepsilon_{t-1} + \varepsilon_t,$

where $|\phi| < 1$ and $|\theta| < 1$ to ensure that the variance is finite. Only under these conditions can this model be guaranteed to be stationary (Alexander, 2008).

With respect to the order of an ARMA process, even though the partial autocorrelation function (PACF) cuts off at lag p for a stationary AR(p) process, and the autocorrelation function (ACF) cuts off at lag q for a stationary MA(q) process, using the ACF and PACF plots to decide p and q for a stationary ARMA(p, q) process can be difficult (Tsay, 2005), since an ARMA series has an exponentially decaying ACF and an exponentially decaying PACF (ibid). Practically, the orders of an ARMA(p, q) model can be decided by information criteria, such as Akaike's information criterion (AIC), which is easier and more adequate (Brooks, 2008). The AIC is written as 2K - 2lnL, where lnL is the maximized log-likelihood and K is the number of estimated parameters. The higher lnL is, the better the fit; the larger K is, the more complex the model. Hence, the order combination with the smallest AIC value, balancing few parameters against a high lnL, is regarded as the most appropriate choice (Alexander, 2008).

4.2.2. Testing for ARCH Effect

The disturbances ($\varepsilon_t$) generated from the mean equation are assumed to form a white-noise process. If no ARCH effect existed in the error terms, the volatility of the series would be constant and the standard deviation alone could represent it. Nevertheless, for some financial time series, the error terms ($\varepsilon_t$) do not satisfy the homoscedastic assumption of a constant variance (h). The time-varying variance ($\sigma_t^2$), which may depend on the square of one past error ($\varepsilon_{t-1}^2$) or more, is called the conditional variance (i.e. heteroskedastic variance, reflecting the volatility clustering effect). Theoretically, a white-noise process does not imply independence between $\varepsilon_t$ and its past values, except in the case of a Gaussian white-noise process (Akgiray, 1989). In other words, successive error terms ($\varepsilon_t$) may be dependent even though they are uncorrelated.

Therefore, it is critical to check whether the squared residuals ($\varepsilon_t^2$) are conditionally heteroskedastic before applying GARCH models. Roughly, if the ACFs of the squared (second-order) EUR/USD series exhibit non-linear temporal dependence, the series is heteroskedastic, even when the ACFs of the first-order series show no linear dependence (Beine and Laurent, 1999). There are several numerical tests for the presence of ARCH effects, including the Ljung-Box test Q(m) and the Lagrange multiplier test (Engle, 1982). The Lagrange multiplier (LM)9 test will be used via the R software in this thesis.

9 The LM statistic is $TR^2$, where T is the number of observations and $R^2$ is obtained from the auxiliary regression $\hat{\varepsilon}_t^2 = \gamma_0 + \gamma_1 \hat{\varepsilon}_{t-1}^2 + \dots + \gamma_q \hat{\varepsilon}_{t-q}^2 + u_t$. It is distributed as chi-squared with q degrees of freedom. The null hypothesis is "no ARCH effect".

References
