
Volatility Forecasting Techniques and Volatility Trading: the case of currency options by

Lampros Kalivas Nikolaos Dritsakis

PhD Candidate, University of Macedonia, MSc in International Banking and Financial Studies,

University of Southampton,

Address: Tsimiski Str, 14, 54624, Thessaloniki Tel.: 0030 974 391844, 0030 310 252803

Email: L.Kalivas@csd.ase.gr

Professor of Econometrics, Department of Applied Informatics, University of Macedonia, Address: Egnatia Str. 156, Mailbox: 1591, 54006,

Thessaloniki,

Tel.: 0030 310 891876, 0030 310 436533 Email: Drits@uom.gr

Abstract

The objective of this paper is to provide evidence on how foreign exchange rates move under time-varying volatility. Our first step was to identify the existence of heteroscedasticity. We then applied widely acclaimed methods to estimate future exchange rates under changing volatility, building on the theoretical framework established by Engle (1982) and Bollerslev (1986). Having achieved this first goal, we introduce different methods for estimating future foreign exchange volatility, such as implied volatility and historical volatility approaches, in order to exploit their information for “volatility trading” on currency options.

1. Introduction

Foreign exchange rates have long been a leading instrument for conducting macroeconomic policy. When the floating exchange rate system became dominant in international financial markets (1973), academics and practitioners became involved in the accurate prediction of future exchange rates.

In addition to that, a modern investor chooses foreign exchange rates as one of the main components of his portfolio. Thus, foreign exchange rate predictions involve not only the macroeconomic but also the microeconomic point of view. That is why predictions of currency movements have attracted considerable attention in the academic literature. The introduction of several financial derivatives related to foreign currency movements made this attention more apparent for both market participants and market regulators. Regulators are now keen on trying to predict the “future”. However, it is difficult to achieve an accurate forecast. Initially, researchers tended to focus upon the mean-return characteristics of foreign exchange market returns. After the international market crash, the emphasis shifted to the volatility of these returns. Large changes in price movements have become apparent, and some observers have blamed institutional regulatory changes for the increase in volatility.

These concerns have led researchers to examine the level and stationarity of volatility over time. Specifically, research has been directed toward investigating the accuracy of volatility forecasts obtained from various econometric models (Brailsford and Faff, 1996). These econometric models include the autoregressive conditional heteroscedasticity (ARCH) family of models, originally devised by Engle (1982).

Besides these econometric models, there are several other ways of estimating future volatility. Over the last two decades, volatility implied from currency options has become both a dominant and an accurate measure of actual volatility.


Volatility forecasts have many practical applications such as use in the analysis of market timing decisions, aid with portfolio selection and the provision of estimates of variance for use in option pricing models. Thus, it follows that it is important to distinguish between various models in order to find the model which provides the most accurate forecast (Brailsford and Faff, 1996).

The literature provides conflicting evidence about the superiority of each method. On the one hand, some researchers stress that relatively complex forecasts (ARCH and GARCH models and implied volatility forecasts) provide the highest-quality estimates. On the other hand, a significant number of published papers have demonstrated the superiority of simple or “naive” volatility models, such as the historical standard deviation or forecasts drawn from OLS regressions.

The objective of this study is to provide evidence on the ability of each method of estimation to forecast future actual volatility. This is a preliminary step, as the ultimate objective is to estimate the prevailing future exchange rate value under heteroscedasticity. In order to support this view in real terms, a case study based on true market values will be provided.

2. Currency Options Valuation

The original option pricing model was developed by Black and Scholes (1973). Since then, it has been extended to apply to foreign currency options (Garman and Kohlhagen, 1983; Grabbe, 1986). Garman and Kohlhagen (1983) suggested that foreign exchange rates could be treated as non-dividend-paying stocks.

The standard Garman and Kohlhagen foreign currency option-pricing model has the following form:

C = S e^(−r_f t) N(h) − K e^(−r_d t) N(h − σ√t)

P = K e^(−r_d t) N(σ√t − h) − S e^(−r_f t) N(−h)

where

h = [ln(S/K) + (r_d − r_f + σ²/2) t] / (σ√t)

and,

C: theoretical price of a call option
P: theoretical price of a put option
S: price of the underlying currency
K: strike price
t: time to expiration in years
σ: annual volatility
r_f: the risk-free rate in the foreign currency
r_d: the risk-free rate in the domestic currency
N(x): the cumulative normal distribution function

The Garman and Kohlhagen model has been acclaimed as the most frequently used model (Shastri and Tandon, 1986; Scott and Tucker, 1989; Bardhan, 1995; Ritchken, 1996; Bharadia, Christofides and Salkin, 1996; Hull, 1997), and it is the model to which we will refer throughout our analysis.
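As a purely illustrative sketch (the function names are ours, not taken from any of the cited papers), the model above can be computed in a few lines of Python, using the error function to evaluate the cumulative normal distribution:

```python
import math

def norm_cdf(x):
    # Standard normal cumulative distribution function via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def garman_kohlhagen(S, K, t, sigma, r_d, r_f):
    """Return (call, put) values for a European currency option under the
    Garman-Kohlhagen model."""
    h = (math.log(S / K) + (r_d - r_f + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    call = (S * math.exp(-r_f * t) * norm_cdf(h)
            - K * math.exp(-r_d * t) * norm_cdf(h - sigma * math.sqrt(t)))
    put = (K * math.exp(-r_d * t) * norm_cdf(sigma * math.sqrt(t) - h)
           - S * math.exp(-r_f * t) * norm_cdf(-h))
    return call, put
```

A quick internal consistency check is put-call parity for currency options: C − P = S e^(−r_f t) − K e^(−r_d t) holds exactly for any volatility input.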


3. Volatility Measures

Because of the central role that volatility plays in derivative valuation, a substantial literature is devoted to the specification of volatility and its measurement. Modelling volatility is challenging because volatility in financial markets appears to be highly unpredictable (Abken and Nandi, 1996).

The term volatility usually refers to a statistical measure (usually the standard deviation) of dispersion around the mean exchange rate. In other words, volatility is a measure of the changeability or randomness of the underlying asset (Schwert, 1990).

Amongst traders, volatility may take different meanings. Volatility could be future, historical (backward-looking), forecast or expected (forward-looking), or implied (reflected in an option's market price). Future volatility is what every trader would like to know: the volatility that best describes the future distribution of prices for the underlying contract. In theory, it is this number to which we refer when we speak of the volatility input into the Garman-Kohlhagen model (Natenberg, 1994). The best approximation for future volatility is expected volatility. Because expected volatility is difficult or impossible to measure directly, historical volatility is often used as an estimate of future volatility or as a starting point for predicting volatility (Giddy, 1994).

Finally, implied volatility provides an estimate of exchange rate volatility for the aggregate period from the time of observation up to the expiration date. The information in this type of volatility may well be exploited by traders taking the appropriate position in the market. Traders may occasionally lose because of short-term bad luck, but in the long run they can profit from their investment.

3.1. Historical or Backward-looking Volatility

Although a vast number of researchers have dealt with different approximations of historical volatility, the literature can be divided into (a) volatility estimated by the standard deviation of continuously compounded or logarithmic asset returns (Shastri and Tandon, 1986; Cho and Frees, 1988; Ritchken, 1996; Hull, 1997) and (b) volatility estimated by the standard deviation of discrete-time or arithmetic asset returns (Poterba and Summers, 1986; Gujarati, 1988; Schwert, 1990; Brailsford and Faff, 1996). Generally, models of historical volatility make simplifying assumptions, such as the stationarity of the mean of returns. However, the mean changes over time; thus, a moving average is employed. The choice of the moving average estimation period is arbitrary (Brailsford and Faff, 1996). Ceteris paribus, more data generally lead to more accuracy. However, the standard deviation does change over time, and data that are too old may not be relevant for predicting the future. A compromise that seems to work reasonably well is to use closing prices from daily data over the most recent 90 to 180 days. A rule of thumb that is often used is to set the time period over which the volatility is measured equal to the time period over which the prediction is to be applied. Thus, if the volatility is to be used to value a three-month option, three months of historical data are used (Hull, 1997).
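The logarithmic-returns approach (a) can be sketched as follows; the function name and the 252-trading-day annualisation convention are our assumptions for illustration only:

```python
import math
import statistics

def historical_volatility(prices, trading_days=252):
    """Annualised standard deviation of daily logarithmic returns,
    computed over the window of closing prices supplied."""
    log_returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return statistics.stdev(log_returns) * math.sqrt(trading_days)
```

Following Hull's rule of thumb, a trader valuing a three-month option would pass roughly the last three months of daily closing prices to such a function.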

3.2. Forward-looking volatility

3.2.1. Deterministic-volatility models

While the historical volatility of an asset return is readily computed from observed asset returns, it may be an inaccurate estimate of the future volatility expected to prevail over the remaining life of an option. Unlike the other parameters that are important for pricing (namely, the spot price, the strike price, the interest rate, and the time to maturity), volatility has to be modelled.

The Garman-Kohlhagen pricing model for European currency options assumes that volatility is invariable, something which constitutes the simplest possible approach. However, many authors indicated the opposite, i.e. that the volatility is time-varying (see Bollerslev, Chou and Kroner, 1992).

Moving beyond the constant volatility assumption, time-variation could take two forms. The first approach assumes that variations in volatility are determined by variables known to market participants, such as exchange rates or interest rates. Models of this type are referred to in literature as deterministic volatility models.

Though there are many techniques for the estimation of volatility (variance or standard deviation), among the most popular have been (a) the rolling regression approach (Officer, 1973; Brailsford and Faff, 1996), and (b) gathering the returns data into blocks of time and treating conditional volatility as constant within each block (Poterba and Summers, 1986; French, Schwert and Stambaugh, 1987).

The second approach, the simplest relaxation of the constant volatility assumption, is to allow volatility to depend on its own past information in such a way that future volatility can be perfectly predicted from its history. Abken and Nandi (1996) and Brailsford and Faff (1996) describe a method in which the variance of asset returns follows:

σ²_(t+1) = α + β σ²_t

The above equation models volatility in such a way that future volatility depends on a constant plus a constant proportion of the last period's volatility (Abken and Nandi, 1996). In addition, Natenberg (1994) proposed a forecast using a weighting method, giving more distant volatility data progressively less weight.
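Natenberg's weighting idea can be illustrated with an exponentially weighted variance, in which the most recent squared return receives the largest weight. The decay parameter λ below is a hypothetical choice of ours, not a value taken from the text:

```python
def ewma_variance(returns, lam=0.94):
    """Exponentially weighted variance estimate: the most recent squared
    return receives weight proportional to (1 - lam), the one before
    proportional to lam * (1 - lam), and so on."""
    n = len(returns)
    weights = [(1 - lam) * lam ** i for i in range(n)]  # weights[0] -> most recent
    total = sum(weights)
    # `returns` is ordered oldest first, so pair weights with reversed returns.
    return sum(w * r * r for w, r in zip(weights, reversed(returns))) / total
```

By construction, a shock that occurred yesterday moves this estimate far more than the same shock fifty observations ago.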

Such weighting schemes are used by many traders and academics to forecast volatility: one identifies the typical characteristics of volatility and then projects a volatility over the forecasting period. This line of thought led to the development of the ARCH family of models (Engle, 1982). Autoregressive Conditional Heteroscedasticity (ARCH) models for volatility are deterministic specifications that use the information in past prices to update the current asset volatility, and they can reduce pricing biases that appear in the Garman-Kohlhagen model. Thus, ARCH models provide a well-established quantitative method for estimating and updating volatility (Abken and Nandi, 1996).

An ARCH model implicitly creates a measure of one-step-ahead volatility that is a weighted average of past squared returns, instead of an equally weighted computation of variance. Bearing in mind that periods of high volatility are followed by periods of high volatility and periods of low volatility by periods of low volatility (Mandelbrot, 1963), ARCH models are able to capture volatility clustering. A very simple version of an ARCH model is:

y_t = α_0 + α_1 y_(t−1) + … + α_n y_(t−n) + ε_t   (1)

ε²_t = (y_t − α_0 − α_1 y_(t−1) − … − α_n y_(t−n))²   (2)

σ²_t = ω + Σ_(i=1..q) β_i ε²_(t−i) ≡ ω + β(L) ε²_t   (3)

where equation (1) is a generalised ARMA model and L is the lag operator. The expected value of (2) constitutes the conditional variance of y_t. Equation (3) is the linear ARCH(q) model introduced by Engle (1982). The conditional variance must be positive, so the parameters must satisfy:

ω > 0, β_1 ≥ 0, …, β_q ≥ 0

It is essential that q > 0 in order for the linear ARCH(q) model to exist; otherwise, we have constant variance ω (homoscedasticity). If we define u_t ≡ ε²_t − σ²_t, the ARCH(q) model may be rewritten as ε²_t = ω + β(L) ε²_t + u_t. This process is covariance stationary if and only if the sum of the positive autoregressive parameters is less than one, in which case the unconditional variance equals

σ² ≡ Var(ε_t) = ω / (1 − β_1 − … − β_q)

(Bollerslev, Engle and Nelson, 1994). Therefore, ordinary least squares is the most efficient linear estimator for the coefficients of (1).
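The clustering property and the stationarity condition can be made concrete with a short simulation. The following Python sketch (parameter values and function name are illustrative assumptions of ours) simulates an ARCH(1) process, starting the recursion at its unconditional variance ω/(1 − β):

```python
import random

def simulate_arch1(omega, beta, n, seed=12345):
    """Simulate eps_t with conditional variance
    sigma2_t = omega + beta * eps_{t-1}^2  (a linear ARCH(1) process)."""
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - beta)      # start at the unconditional variance
    eps = []
    for _ in range(n):
        e = rng.gauss(0.0, 1.0) * sigma2 ** 0.5
        eps.append(e)
        sigma2 = omega + beta * e * e  # next period's conditional variance
    return eps
```

For a long simulated sample the empirical variance should be close to ω/(1 − β), in line with the stationarity result above.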

The ARCH process explicitly recognises the difference between the unconditional and the conditional variance, allowing the latter to change over time as a function of past errors. A natural generalisation of the ARCH family is to allow the conditional variance to change over time as a function of past conditional variances as well. These models are known in the literature as GARCH (Generalised Autoregressive Conditional Heteroscedasticity) models and were originally proposed by Bollerslev (1986). The GARCH(p,q) model is then given by

σ²_t = ω + Σ_(i=1..q) β_i ε²_(t−i) + Σ_(i=1..p) γ_i σ²_(t−i) ≡ ω + β(L) ε²_t + γ(L) σ²_t

where p ≥ 0, q > 0, ω > 0, β_1 ≥ 0, …, β_q ≥ 0, γ_1 ≥ 0, …, γ_p ≥ 0.

If p is zero, the GARCH(p, q) process reduces to an ARCH(q) process. If both p and q are equal to zero, the error ε_t is simply white noise with constant variance.

Generally, GARCH models successfully capture volatility clustering (Diebold and Nerlove, 1989). If volatility comes in clusters, then the asset returns or prices may exhibit GARCH behaviour even if the market perfectly and instantaneously adjusts to the news (Engle, Ito and Lin, 1990). Those models can also be modified to allow for several other anomalies, such as non-trading periods and predictable information releases (Bollerslev, Engle and Nelson, 1994).
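The GARCH(1,1) recursion that underlies these results can be sketched directly; the parameter values in the usage below and the choice of the unconditional variance as a starting value are our illustrative assumptions:

```python
def garch11_filter(returns, omega, beta, gamma):
    """Run the GARCH(1,1) recursion
    sigma2_t = omega + beta * eps2_{t-1} + gamma * sigma2_{t-1}
    over a sequence of returns, returning the conditional-variance path."""
    sigma2 = omega / (1.0 - beta - gamma)  # initialise at the unconditional variance
    path = [sigma2]
    for e in returns:
        sigma2 = omega + beta * e * e + gamma * sigma2
        path.append(sigma2)
    return path
```

A single large return raises the next period's conditional variance, which then decays geometrically back toward its long-run level, which is exactly the clustering behaviour described above.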

An early test of a GARCH option pricing model is Engle and Mustafa (1992), who examined S&P 500 index options. Their results show that the GARCH pricing model cannot account for all of the pricing biases observed in the option market.

Engle, Lilien and Robins (1987) extend the basic ARCH framework to allow the mean of a sequence of asset returns to depend on its own conditional variance.

This class of model, called ARCH-in-Mean (or ARCH-M), is particularly suited to the study of asset markets. The basic assumption of this approach is that investors are risk-averse and return-loving; in other words, the excess return or risk premium is an increasing function of the conditional variance of returns (Enders, 1995). Mathematically, if y_t is the excess return from holding a risky asset relative to a risk-free asset, µ_t is the risk premium necessary to hold the risky asset rather than the risk-free asset, and σ²_t is the conditional variance of ε_t, we have the following relationships:

y_t = µ_t + ε_t

µ_t = κ + λ σ²_t

where σ²_t is the ARCH(q) process:

σ²_t = ω + Σ_(i=1..q) β_i ε²_(t−i) ≡ ω + β(L) ε²_t

In their novel work, Engle, Ito and Lin (1990) explain the causes of volatility clustering in exchange rates. They examine intra-daily exchange rates and provide evidence that volatility does not exhibit country-specific autocorrelation. Instead, they demonstrate that, during the day, there is evidence of volatility spillovers from one market to the next.

Several other studies have applied various extensions of ARCH and GARCH models to estimate changing volatility (variance) in exchange rates. Hsieh (1989) found that the standard GARCH(1,1) and EGARCH(1,1) (see Nelson, 1991) are extremely successful at removing conditional heteroscedasticity. Other papers have pointed out that GARCH models, compared to other models, tend to make slightly more accurate predictions (West and Cho, 1995).

Several alternative measures to the ARCH model defined above have been employed in order to perform efficient estimations of volatility in asset returns. One such alternative involves the construction of variance estimates by averaging the squared errors obtained with models for the conditional mean over fixed intervals of time. Several authors construct either monthly or annual stock return variance estimates by taking the average of the squared daily returns within the period (month or year). Other researchers have used the difference between the high and low prices on a given day to estimate volatility for that day (Parkinson, 1980).
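The high-low estimator of Parkinson (1980) can be sketched as follows; the 252-trading-day annualisation and the function name are our illustrative assumptions:

```python
import math

def parkinson_volatility(highs, lows, trading_days=252):
    """Parkinson (1980) range-based estimator: the daily variance is
    (ln(high/low))^2 / (4 ln 2), averaged over the sample and annualised."""
    factor = 1.0 / (4.0 * math.log(2.0))
    daily_var = sum(factor * math.log(h / l) ** 2
                    for h, l in zip(highs, lows)) / len(highs)
    return math.sqrt(daily_var * trading_days)
```

Because it uses the intraday range rather than close-to-close changes, this estimator extracts more information from each trading day than a closing-price standard deviation.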

Although these methods are usually quite accurate if one wants to measure volatility at a point in time (Campbell, Lo and MacKinlay, 1996), they are dominated by the implicit assumption of constant volatility over some interval of time (year, month or day). Furthermore, the shorter the time interval, the more accurate the prediction of volatility. This implicitly leads to the concept of diffusion processes, which will be examined in the following section.

3.2.2. Stochastic volatility models

An alternative to the deterministic-volatility approach assumes that volatility changes randomly or “stochastically”. Randomness means that future volatility cannot be readily predicted using current and past information. In other words, market participants cannot make use of market information from historical prices because it is efficiently incorporated into the current price. This is consistent with weak-form market efficiency. This category of models contrasts with deterministic-volatility models in that the source of uncertainty that generates volatility is of a different form.

In characterising any process, it is first necessary to specify a time set, t. If observations are recorded continuously during this interval, the process is called continuous. If observations are made periodically, then t consists of a sequence of times and the process is referred to as a discrete process. Discrete processes are by far the most widely applied. Under a discrete process, the period of time t is partitioned into n time increments of width ∆t, and prices are observed at the end of each increment. In other words, we have n price observations. This sequence constitutes the stochastic process.

Mathematically, a discrete-time stochastic process, or generalised Wiener process, can be represented as:

∆y = α ∆t + σ z √∆t

where z is a random drawing from a standardised normal distribution, ∆y is the return on the asset, and α and σ are the mean and standard deviation, respectively.
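The discrete process above can be simulated in a few lines; the function name and parameter values below are our illustrative assumptions:

```python
import random

def simulate_path(y0, alpha, sigma, n, dt, seed=0):
    """Simulate a discrete generalised Wiener process:
    each step adds alpha * dt + sigma * z * sqrt(dt), z ~ N(0, 1)."""
    rng = random.Random(seed)
    path = [y0]
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] + alpha * dt + sigma * z * dt ** 0.5)
    return path
```

Setting σ = 0 recovers a deterministic drift of α per unit of time, which is a convenient sanity check on the recursion.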

The implication of such models lies in the fact that, knowing the asset's return at a specific future point in time, the price of the associated option can be well approximated. In accordance with this implication, many authors have tried to develop option pricing models based on stochastic volatility (Hull and White, 1987; Wiggins, 1987).

Some authors made doubtful assumptions, such as that the volatility is uncorrelated with the returns of the underlying asset and that the volatility plays no role in the valuation of options (Hull and White, 1987). Wiggins (1987) observed that the estimated option values under stochastic volatility were approximately equivalent to the values derived from Black-Scholes formula.

As far as currency options are concerned, Melino and Turnbull (1990) found that the stochastic volatility model dominates the standard option valuation models. In addition, they pointed out that a stochastic volatility model yields option prices which coincide with the observed option prices (market prices). This view is in contrast with the one taken by Chesney and Scott (1989), who presented an empirical analysis of the random-variance model and found that it actually provides a worse fit to market prices than the Black-Scholes model using implied standard deviations.

Although stochastic-volatility option pricing models in some cases exhibit better forecasting ability than other models, parameter estimation is typically demanding and problematic in terms of processing the huge amount of data. Moreover, for both academic researchers and market participants, no consensus exists regarding the most accurate approach for pricing options or predicting asset returns. In an attempt to find more tractable volatility models, academics have proposed other methods of estimating volatility, such as models accounting for ‘volatility jumps’ and methods of deriving implied volatility, which call for prompt and precise application.

3.3. Implied volatility

Generally speaking, historical and forecast volatility are associated with the underlying asset. In our case, these two types of volatility characterise exchange rates. There is, however, a different kind of volatility, called implied volatility, which is associated with an option's market value rather than with the market value of the underlying asset.

Obtaining implied volatility requires the use of an option pricing model, such as the Garman-Kohlhagen model in the case of currency options. It is the volatility we must feed into our theoretical pricing model to yield a theoretical value identical to the price of the option in the market (Natenberg, 1994).

Given the price of the option in the marketplace, and given that all the other determinants of option pricing (i.e. interest rates, spot price, strike price) are held constant, there is only one volatility value which corresponds to that market value.

Though option pricing models cannot be inverted analytically, there are several algorithms for calculating implied volatility and various weighting schemes used to derive a single volatility estimate from the prices of different options.

Apart from “shotgun” methods (trial and error), faster convergence can be achieved if an analytic expression is known for the option's “vega”. Such is the case of the Newton-Raphson method, which can easily be applied to the Garman-Kohlhagen model.

Since the Garman-Kohlhagen model cannot be inverted analytically, Newton-Raphson has become the dominant method among academics and practitioners (Scott and Tucker, 1989; Natenberg, 1994; Bharadia, Christofides and Salkin, 1996; Hull, 1997). This procedure can be used to converge on the implied volatility quickly.
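A self-contained sketch of the Newton-Raphson procedure for a Garman-Kohlhagen call follows; the call formula is restated so the block stands alone, and the starting guess of 0.2 is an assumption of ours:

```python
import math

def _ncdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gk_call(S, K, t, sigma, r_d, r_f):
    """Garman-Kohlhagen value of a European currency call."""
    h = (math.log(S / K) + (r_d - r_f + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    return (S * math.exp(-r_f * t) * _ncdf(h)
            - K * math.exp(-r_d * t) * _ncdf(h - sigma * math.sqrt(t)))

def implied_vol(price, S, K, t, r_d, r_f, sigma0=0.2, tol=1e-8):
    """Newton-Raphson iteration on sigma, using the analytic vega
    (the derivative of the call price with respect to sigma)."""
    sigma = sigma0
    for _ in range(100):
        h = (math.log(S / K) + (r_d - r_f + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
        vega = S * math.exp(-r_f * t) * math.sqrt(t) * math.exp(-0.5 * h * h) / math.sqrt(2 * math.pi)
        diff = gk_call(S, K, t, sigma, r_d, r_f) - price
        if abs(diff) < tol:
            return sigma
        sigma -= diff / vega  # Newton step: move against the pricing error
    return sigma
```

A natural test of the routine is a round trip: price an option at a known volatility, then verify that the routine recovers that volatility from the price.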

If the Garman-Kohlhagen model held exactly, options with different strike prices and expiration dates would be priced so as to yield the same implied volatility. The phenomenon of different implied volatilities for different strike prices is often referred to in the literature as the "volatility smile" (Natenberg, 1994; Abken and Nandi, 1996). In other words, in-the-money and out-of-the-money options tend to exhibit higher implied volatility than at-the-money options.

Furthermore, we cannot rule out that the implied volatilities from calls and puts may differ. There is reason to believe that the implied volatility of a put option is higher than that of a call, because the put is a natural hedging instrument and investors use it as an instrument of insurance (the “protective put” strategy) (Ncube, 1996).

Even if market makers were to price options according to the established Garman-Kohlhagen model, transactions costs, low liquidity and nonsynchronous trading would cause implied volatilities to lie within a wide range of values.

In order to derive a single volatility estimate from different implied volatilities, several weighting schemes have been implemented. A single implied volatility may be weighted according to equal weights, the volume of options traded, “vega” values, open interest, or by assigning the greatest weight to at-the-money options.

The most popular method is to apply equal weights to all implied volatilities (Schmalensee and Trippi, 1978). According to Schmalensee and Trippi (1978), the Black-Scholes model prices at-the-money options more accurately than the others. Thus, options that were deep in-the-money or deep out-of-the-money did not participate in the weighting scheme.

Another weighting scheme is to apply weights according to the “vega” value of each option. However, this approach could be highly biased, because “vega” values do not sum to one. To improve on it, it would be more effective to use, as the weighting factor, the “vega” value multiplied by the corresponding implied volatility and divided by the corresponding option price.
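The adjusted vega weighting described above can be sketched as follows (the function name is ours, and the normalisation to unit weights is our reading of the proposal):

```python
def weighted_implied_vol(implied_vols, vegas, prices):
    """Combine several implied volatilities into one estimate, weighting
    each by vega * implied_vol / option_price and normalising the
    weights so that they sum to one."""
    raw = [v * s / p for v, s, p in zip(vegas, implied_vols, prices)]
    total = sum(raw)
    return sum((w / total) * s for w, s in zip(raw, implied_vols))
```

Because the weights are normalised, the combined estimate always lies between the smallest and largest individual implied volatility, avoiding the bias of unnormalised vega weights.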

Although there are different algorithms for the calculation of implied volatility and different weighting schemes for the derivation of a single implied volatility, it is difficult to claim which method should be used. It is a matter of speed and accuracy as far as algorithms are concerned, and a matter of individual preference as far as weighting schemes are concerned. Consistent with this opinion are the results of Scott and Tucker (1989). They examined the relative performance of three different weighting schemes for calculating implied volatility from American foreign exchange call options on the British pound, Canadian dollar, Deutschemark, yen and Swiss franc. They found no evidence that one of the three weighting schemes is superior to the others.

3.4. Comparison between historical and implied volatility

One of the most important issues in predicting volatility is whether the forecast should be based on historical volatility measures, implied volatility, or some combination of the two. Several authors (Schmalensee and Trippi, 1978; Natenberg, 1994) found that implied volatility estimates are superior to historically based volatility estimates of any kind in predicting future volatility values. In contrast with this view, Jorion (1988) claimed that implied standard deviations are biased forecasts of future volatility and are sometimes found to be worse than historically based volatility estimates. This phenomenon admits two possible explanations: either the test procedure is faulty, or option markets are inefficient. In order to have a more integrated view of historical and implied volatility, it is preferable to examine the interrelation between them.

Implied volatility can be thought of as a consensus volatility among all market participants with respect to the expected amount of underlying price fluctuation over the remaining life of the option (Natenberg, 1994). On the other hand, historical volatility is an actual variable, and it is logical for market makers to use it for pricing options.

It is widely known that when historical volatility rises, the implied volatility of all options is likely to rise. From the market participants' point of view, some traders may change their volatility forecast in response to changing historical volatility. Consequently, it is logical to assume that the whole market will also change its consensus volatility in response to changing historical volatility. Implied volatility is affected by several other factors apart from historical volatility, such as government announcements, speculative trading activities and various events that are likely to occur in the future. Isolating the historical volatility effects, we can argue that when the market becomes more volatile, implied volatility can be expected to rise.

The idea behind this view is that when historical volatility increases, bid-ask spreads in foreign exchange dealing widen because market makers need to anticipate the additional risk. In effect, the ask price, which is responsible for pricing foreign currency call options, becomes higher. In turn, foreign currency call options yield higher values, because the relationship between the spot foreign exchange rate and call currency options is a positive function. Similarly, the bid price, which is responsible for pricing foreign currency put options, becomes lower. In turn, foreign currency put options yield higher values, because the relationship between the spot foreign exchange rate and put currency options is a negative function. In both cases, higher implied volatilities will be derived from the higher option premia. Exactly the opposite will happen if historical volatility falls.

However, two simplifying assumptions were made in the previous analysis. First, we implicitly assumed that, after the impulse from historical volatility, bid and ask prices move away from their initial level by the same amount. Second, and more important, we assumed that bid and ask prices are responsible for pricing put and call options, respectively. This may not always be the case. Sometimes, middle rates or “fixing” exchange rate quotations are used for pricing both put and call options. If pricing according to middle rates is accompanied by the first assumption, an increase in historical volatility will have no effect on implied volatility. Even if options are priced according to bid-ask prices, this does not mean that options with different strike prices and maturities will be affected in the same way. Natenberg (1994) examined the mean-reverting characteristics of option contracts and found that the further out in time we go, the greater the likelihood that the volatility of the underlying option will return to its mean. Given these characteristics, the implied volatility of long-term options will tend to rise less than the implied volatility of short-term options in response to an increase in historical volatility. As historical volatility falls, the implied volatility of long-term options will tend to fall less than that of short-term options. In the empirical part of this study, the above relationships will be presented diagrammatically.

4. Heteroscedasticity: detection and implications

The variety of opinions about the distributions of foreign exchange price changes and their generating process is wide. Some reject any single distribution, while others accept that the generating process is heteroscedastic (Scott and Tucker, 1989). Such processes were considered in the previous discussion of ARCH and GARCH models. However, all volatility forecasting methods examined up to now implicitly assume either heteroscedasticity or homoscedasticity.

In dealing with the problem of heteroscedasticity, it is helpful to identify situations where it is likely to occur. It is most likely to occur when cross-section data are used. We guard against this by using time-series data from each country separately.

However, heteroscedasticity may appear even when time-series are used in a study. The huge amount of data used can still show heteroscedasticity. In the case of foreign exchange rates, we can identify this problem either in non-synchronicity in foreign exchange rates measures or in explicit averaging. Explicit averaging takes place when middle prices are used. This kind of measurement may not correspond to the true foreign exchange rate.

Before using one of the models for predicting exchange rate volatility, the presence or absence of heteroscedasticity should be investigated. The literature suggests several ways of identifying it.

Glejser (1969) suggested regressing the absolute values of the residuals from an OLS regression on the independent variable thought to be most closely associated with the conditional variance. However, not all functional forms suggested by Glejser are linear in the parameters, and those that are not cannot be estimated with the usual OLS regression.
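Glejser's procedure can be sketched on simulated data as follows; the data-generating process and all variable names are illustrative, not drawn from this study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: the error standard deviation grows with x,
# so the series is heteroscedastic by construction.
n = 500
x = rng.uniform(1.0, 5.0, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.3 * x)

# Step 1: ordinary OLS of y on x, keeping the residuals.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2 (Glejser): regress |residuals| on x and inspect the slope.
g, *_ = np.linalg.lstsq(X, np.abs(resid), rcond=None)
abs_fit = X @ g
s2 = np.sum((np.abs(resid) - abs_fit) ** 2) / (n - 2)
cov = s2 * np.linalg.inv(X.T @ X)
t_slope = g[1] / np.sqrt(cov[1, 1])
print(f"Glejser slope t-statistic: {t_slope:.2f}")  # large -> heteroscedasticity
```

A significant slope in the auxiliary regression signals that the residual spread depends on the regressor.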

Although Glejser (1969) found that, for large samples, all linear forms give generally satisfactory results in detecting heteroscedasticity, Goldfeld and Quandt (1972) pointed out that the error term from these functional forms is problematic: its expected value is nonzero, and it is serially correlated and itself heteroscedastic (see Gujarati, 1988). Consequently, Goldfeld and Quandt (1972) tried to eliminate the disadvantages of the Glejser test. Their test requires that the observations be ranked by the value of the independent variable and the total sample split into two subsamples, one corresponding to large values of the independent variable and the other to small values. Goldfeld and Quandt fitted a separate regression to each subsample and then applied an F-test for the equality of the error variances.

Goldfeld and Quandt also suggested omitting some central observations between the two subsamples, in order to sharpen the discrimination between the two error variances.

The test is, however, practical only for samples of moderate size: huge amounts of data are difficult to manipulate, while for very small samples it may prove ineffective, because estimating two separate regressions requires twice as many degrees of freedom as a single regression. Thus, very small samples with many independent variables cannot be handled.
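A minimal sketch of the Goldfeld-Quandt procedure on simulated data follows; the sample, the fraction of central observations omitted, and the data-generating process are all illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 300
x = np.sort(rng.uniform(1.0, 5.0, n))          # observations ranked by the regressor
y = 1.0 + 0.8 * x + rng.normal(0.0, 0.4 * x)   # error variance rises with x

def rss(xs, ys):
    """Residual sum of squares of an OLS fit of ys on a constant and xs."""
    X = np.column_stack([np.ones(len(xs)), xs])
    b, *_ = np.linalg.lstsq(X, ys, rcond=None)
    e = ys - X @ b
    return e @ e

omit = n // 5                      # drop central observations, as suggested
m = (n - omit) // 2
rss_low = rss(x[:m], y[:m])        # small-x subsample
rss_high = rss(x[-m:], y[-m:])     # large-x subsample

df = m - 2                         # two parameters estimated per subsample
F = (rss_high / df) / (rss_low / df)
p = 1.0 - stats.f.cdf(F, df, df)
print(f"F = {F:.2f}, p = {p:.4f}")  # small p rejects equal error variances
```

Dropping the central block makes the two variance estimates come from clearly separated regions of the regressor, which is exactly why the omission improves discrimination.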

Breusch and Pagan (1979) describe a Lagrange Multiplier test against the very general alternative that the error variance is some function of a set of independent variables. Since that function may be of any form, the Breusch-Pagan test does not depend on the functional form.

The Breusch-Pagan test is a test of the null hypothesis that the estimated coefficients of the independent variables in the auxiliary regression are all zero. If the null holds, we have homoscedasticity; otherwise, heteroscedasticity is present.
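The idea can be sketched as follows, using the studentised (Koenker) variant of the statistic, which is n times the R-squared of the auxiliary regression of the squared residuals on the regressors; the data are simulated and illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(1.0, 5.0, n)
y = 0.5 + 1.2 * x + rng.normal(0.0, 0.5 * x)   # heteroscedastic by construction

X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e2 = (y - X @ b) ** 2

# Auxiliary regression of squared residuals on the regressors.
# LM = n * R-squared, chi-squared with (number of slopes) degrees of freedom.
g, *_ = np.linalg.lstsq(X, e2, rcond=None)
fit = X @ g
r2 = 1.0 - np.sum((e2 - fit) ** 2) / np.sum((e2 - e2.mean()) ** 2)
lm = n * r2
p = 1.0 - stats.chi2.cdf(lm, df=1)
print(f"LM = {lm:.2f}, p = {p:.4f}")  # small p -> reject homoscedasticity
```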

Although the residuals from a regression may be uncorrelated with the independent variable, the squared residuals could still be correlated with it. Thus, a regression-based test for heteroscedasticity should regress the residuals, or their absolute or squared values, on various transformations of the independent variable. In the case of multiple regression (more than one independent variable), powers of the predicted dependent variable or powers of all the explanatory variables should be used.

White (1980) regressed the squared residuals on all the explanatory variables together with their squares and cross products. In either form of auxiliary regression, significant coefficients indicate that heteroscedasticity is present.
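White's auxiliary regression can be sketched on simulated heteroscedastic data as follows; the two-regressor setup and all numbers are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 400
x1 = rng.uniform(1.0, 4.0, n)
x2 = rng.uniform(1.0, 4.0, n)
y = 1.0 + 0.6 * x1 - 0.3 * x2 + rng.normal(0.0, 0.3 * x1 * x2)

X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e2 = (y - X @ b) ** 2

# White's auxiliary regression: levels, squares and cross products.
Z = np.column_stack([np.ones(n), x1, x2, x1**2, x2**2, x1 * x2])
g, *_ = np.linalg.lstsq(Z, e2, rcond=None)
fit = Z @ g
r2 = 1.0 - np.sum((e2 - fit) ** 2) / np.sum((e2 - e2.mean()) ** 2)
lm = n * r2                         # chi-squared, df = columns of Z minus 1
p = 1.0 - stats.chi2.cdf(lm, df=Z.shape[1] - 1)
print(f"White LM = {lm:.2f}, p = {p:.4f}")
```

The cross-product terms are what let the test pick up variance patterns that depend jointly on several regressors, which is the flexibility cited later in the Methodology section.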

Although volatility is a single concept, there are many ways of estimating it. The most prevalent are: volatility obtained from historical data, volatility that evolves stochastically, volatility implied in currency option prices, and volatility derived from fitted regressions. Although these regressions could be simple OLS regressions, some interesting innovations have appeared in the literature.

Amongst them, historical volatility is the most straightforward. Stochastic volatility is of great importance because foreign exchange rate movements have been shown to be stochastic. However, this type of volatility assumes that volatility is proportional to time, which is not always the case.

Implied volatility and volatility predicted from ARCH or GARCH processes are the most interesting approaches. In this chapter, weighting schemes and algorithms for calculating implied volatilities are presented. As far as ARCH and GARCH processes are concerned, we drew on the original works of many authors in the field (among them Engle, 1982; Bollerslev, 1986; Engle, Lilien and Robins, 1987).

Whatever the measurement approach, it is essential to establish whether or not volatility is constant over time. For this purpose, several techniques for testing the stability of volatility (heteroscedasticity) were presented. As heteroscedasticity is not easily identifiable, choosing the best test in each case is significant, and largely a matter of extensive experimentation. In our case, the best choice proved to be White's test.

5. Empirical Results

5.1. Data

Our first data source is Datastream, covering spot foreign exchange rates and interest rates. Exchange rates are measured as the amount of foreign currency per unit of domestic currency (pound) between the UK and Japan, Germany and the USA. All data are daily closing prices. Interest rates are one-month middle rates. Since risk-free interest rates are difficult to define for each country, we assume that short-term Eurocurrency rates can effectively substitute for them.

Because currency options are relatively recent financial instruments, the limited data span makes it important to utilise fully all the information in the sample; therefore, daily data are used in the regressions. Our sample consists of 188 observations for each country, from January 01, 1993 to July 22, 1997. When a day was a holiday, data from the previous day have been used, so the only missing values correspond to weekends.

Our second source is the Philadelphia Stock Exchange, from which we obtained the PHLX Foreign Currency Options pricing history, distributed on high-density floppy diskettes.

Two styles of foreign exchange option are traded on the Philadelphia Stock Exchange: European and American. Although we should, strictly, use European-style currency options, American-style options are more liquid and appear more frequently on the trading floor. Under the assumption that European and American options yield approximately the same premia, American options are therefore used to derive implied volatilities.


To derive implied volatilities, a separate file is used to store the options for each day. For simplicity, we use a single American $/£ call option for each day from December 26, 1995 to June 13, 1996, a total of 129 observations. After deducting the days on which no call option appeared, 101 observations remain. To assess the effectiveness of the implied volatility measures, we also construct volatility estimates from historical data.

The historical volatility measure is based on spot exchange rates, even though a preferable measure would be the average volatility of forward rates over the maturity of the forward/option contract, since that incorporates information about interest-rate volatility. This measure is not used because forward rates are not available for the specific maturity months of the currency options. Our measure is therefore a non-weighted standard deviation computed over the number of days to expiration of the contract.
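This duration-matched measure can be sketched as follows; the spot series is simulated purely for illustration, and the annualisation convention (252 trading days) is an assumption:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical daily $/£ spot series
spot = 1.55 * np.exp(np.cumsum(rng.normal(0.0, 0.004, 250)))

def historical_vol(prices, days_to_expiry, trading_days=252):
    """Non-weighted standard deviation of log price changes over a window
    equal to the number of days remaining to expiration, annualised."""
    window = np.log(prices[-(days_to_expiry + 1):])
    returns = np.diff(window)
    return returns.std(ddof=1) * np.sqrt(trading_days)

print(f"42-day historical volatility: {historical_vol(spot, 42):.2%}")
```

The window length shrinks with the contract's remaining life, so an option six weeks from expiry is matched against six weeks of daily prices.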

5.2. Methodology

The practical relevance of the Garman-Kohlhagen (1983) model as an approximate currency option pricing formula depends on the investor's ability to forecast exchange rate variability over the remaining life of the option. With the recent growth in popularity of the currency options market, variance prediction has become more crucial as a preliminary to predicting exchange rates themselves.

This chapter deals with the procedure we intend to follow in order to examine the level of predictability of foreign exchange rates under changing volatility. Thus, all steps involved in this examination will be described.

First, the OLS regression that best fits the dependent variable (exchange rates) on a set of independent variables is identified. Once this estimation has taken place, the fitted values are compared with the actual values and estimates of the residuals are derived.

Second, we examine whether these residuals change over time or remain relatively stable. White's test is used to check for heteroscedasticity; it is chosen because it is more reliable than the Goldfeld-Quandt and Breusch-Pagan tests, providing more flexibility in the sense that a wider range of regressors, or products of regressors, can be used. If heteroscedasticity is present, we apply the widely accepted ARCH and GARCH models proposed by Engle (1982) and Bollerslev (1986), respectively, and identify which of their several variations best describes exchange rate movements under changing standard deviation, utilising in-sample diagnostics for each of the three currency pairs separately.

Finally, two simple measures of historical and implied volatility are compared with realised (actual) volatility. Implied volatilities are calculated from currency options taken from PHLX; a Newton-Raphson algorithm (programmed in PASCAL) is used to shorten the time required. Historical volatility estimates, on the other hand, are derived on the basis of days to expiration: historical standard deviations are calculated from a number of observations equal to the number of days to expiration. According to Hull (1997), this provides the best approximation of actual volatility using historical data.

In order to examine the above relationship quantitatively, a modification of the root mean squared prediction error (RMSPE) is introduced, in the form presented by West and Cho (1995):

RMSPE = \sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(\sigma^{2}_{j,actual} - \sigma^{2}_{j,estimated}\right)^{2}}

where

σ²(j,actual): the actual volatility realised from the day the option is traded up to expiration,
σ²(j,estimated): the estimated volatility, whether implied or historical,
N: the number of observations.
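The RMSPE defined above is straightforward to compute; the variance series in the usage example are hypothetical, chosen only so that the comparison runs:

```python
import numpy as np

def rmspe(actual_var, estimated_var):
    """Root mean squared prediction error between realised and forecast
    variances, in the form presented by West and Cho (1995)."""
    a = np.asarray(actual_var, dtype=float)
    e = np.asarray(estimated_var, dtype=float)
    return np.sqrt(np.mean((a - e) ** 2))

# Hypothetical realised variances against two sets of forecasts
actual = [0.010, 0.012, 0.009, 0.015]
historical = [0.011, 0.010, 0.009, 0.013]
implied = [0.016, 0.018, 0.014, 0.022]
print(rmspe(actual, historical) < rmspe(actual, implied))  # -> True
```

A lower RMSPE marks the forecast series that tracks realised variance more closely, which is exactly how the historical and implied measures are ranked later in the paper.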

5.3. Results

In order to find out whether the volatility of foreign exchange rates changes over time, the behaviour of the residuals should be examined. Before proceeding to more complicated models, residuals are drawn from a simple OLS regression with the two most recent lagged values of the foreign exchange rate as regressors.

Fitting our data for the Deutschemark, dollar and yen, both coefficients of the lagged foreign exchange rates are statistically significant at the 95% level. Although the constant term was found to be statistically significant in only two of three cases, this is not of great importance; in the following analysis we therefore refer to the coefficients, ignoring the constant terms. Not all coefficients have the expected sign: although the exchange rate observed on the last trading day (Lag = 1) is positively related to the current rate, the rate observed two trading days ago (Lag = 2) is negatively related to it. This adverse movement might be triggered by a systematic arrival of new information during the two-day interval.

The same phenomenon appears for the yen, while the dollar behaves quite normally, in the sense that the rate observed two trading days ago is positively related to the current one. In all three fitted regressions, the regressors explain more than 98% of the total variation of the dependent variable.

The residuals drawn from those regressions are then used in White's test for the presence of heteroscedasticity. We ran the auxiliary regression on five possible combinations of independent variables for each currency, up to the third lag, and found all coefficients statistically significant; moreover, the Chi-squared statistic is significant at the 5% level. Thus, all three currency pairs exhibit heteroscedastic behaviour. Having established this, we proceed to the application of ARCH and GARCH models, which make allowances for heteroscedasticity.

Variable                        Exchange Rate    Coefficient   Std. Error   Sig. level
Constant                        Mark/Pound          -0.0044       0.0045        0.336
Constant                        Dollar/Pound         0.0232       0.0068        0.000
Constant                        Yen/Pound            0.8817       0.3467        0.011
Lagged Exchange Rate (Lag=1)    Mark/Pound           1.0029       0.0017        0.000
Lagged Exchange Rate (Lag=1)    Dollar/Pound         0.9833       0.0049        0.000
Lagged Exchange Rate (Lag=1)    Yen/Pound            0.9958       0.0019        0.000
Lagged Interest Rate (Lag=1)    Mark/Pound          -0.0005       0.0002        0.013
Lagged Interest Rate (Lag=1)    Dollar/Pound         0.0006       0.0002        0.020
Lagged Interest Rate (Lag=1)    Yen/Pound           -0.1136       0.0093        0.000
Homoscedastic term (constant)   Mark/Pound           0.0001       0.0000        0.000
Homoscedastic term (constant)   Dollar/Pound         0.0000       0.0000        0.000
Homoscedastic term (constant)   Yen/Pound            1.1157       0.0421        0.000
Lagged Residuals (Lag=1)        Mark/Pound           0.1357       0.0333        0.000
Lagged Residuals (Lag=1)        Dollar/Pound         0.1417       0.0000        0.000
Lagged Residuals (Lag=1)        Yen/Pound            0.1640       0.0342        0.000

Table 1: ARCH(1) processes for three currency pairs with two independent variables


Variable                        Exchange Rate    Coefficient   Std. Error   Sig. level
Constant                        Mark/Pound           0.0000       0.0050        0.986
Constant                        Dollar/Pound         0.0205       0.0060        0.001
Constant                        Yen/Pound            0.8280       0.3498        0.018
Lagged Exchange Rate (Lag=1)    Mark/Pound           1.0007       0.0021        0.000
Lagged Exchange Rate (Lag=1)    Dollar/Pound         0.9853       0.0042        0.000
Lagged Exchange Rate (Lag=1)    Yen/Pound            0.9960       0.0020        0.000
Lagged Interest Rate (Lag=1)    Mark/Pound          -0.0003       0.0002        0.103
Lagged Interest Rate (Lag=1)    Dollar/Pound         0.0005       0.0002        0.064
Lagged Interest Rate (Lag=1)    Yen/Pound           -0.1014       0.0316        0.001
Homoscedastic term (constant)   Mark/Pound           0.0001       0.0000        0.008
Homoscedastic term (constant)   Dollar/Pound         0.0000       0.0000        0.000
Homoscedastic term (constant)   Yen/Pound            0.0219       0.0065        0.000
Lagged Variance (Lag=1)         Mark/Pound           0.9458       0.0083        0.000
Lagged Variance (Lag=1)         Dollar/Pound         0.9503       0.0074        0.000
Lagged Variance (Lag=1)         Yen/Pound            0.9289       0.0123        0.000
Lagged Residuals (Lag=1)        Mark/Pound           0.0463       0.0076        0.000
Lagged Residuals (Lag=1)        Dollar/Pound         0.0382       0.0060        0.000
Lagged Residuals (Lag=1)        Yen/Pound            0.0545       0.0089        0.000

Table 2: GARCH(1,1) processes for three currency pairs with two independent variables

Variable                        Exchange Rate    Coefficient   Std. Error   Sig. level
Constant                        Mark/Pound          -0.0003       0.0050        0.952
Constant                        Dollar/Pound         0.0202       0.0060        0.001
Constant                        Yen/Pound            0.8381       0.3489        0.016
Lagged Exchange Rate (Lag=1)    Mark/Pound           0.9650       0.0294        0.000
Lagged Exchange Rate (Lag=1)    Dollar/Pound         0.9733       0.0287        0.000
Lagged Exchange Rate (Lag=1)    Yen/Pound            1.0252       0.0304        0.000
Lagged Exchange Rate (Lag=2)    Mark/Pound           0.0357       0.0294        0.225
Lagged Exchange Rate (Lag=2)    Dollar/Pound         0.0123       0.0288        0.670
Lagged Exchange Rate (Lag=2)    Yen/Pound           -0.0293       0.0303        0.333
Lagged Interest Rate (Lag=1)    Mark/Pound          -0.0003       0.0002        0.086
Lagged Interest Rate (Lag=1)    Dollar/Pound         0.0005       0.0003        0.068
Lagged Interest Rate (Lag=1)    Yen/Pound           -0.1000       0.0316        0.002
Homoscedastic term (constant)   Mark/Pound           0.0000       0.0000        0.007
Homoscedastic term (constant)   Dollar/Pound         0.0000       0.0000        0.000
Homoscedastic term (constant)   Yen/Pound            0.0214       0.0064        0.001
Lagged Variance (Lag=1)         Mark/Pound           0.9460       0.0082        0.000
Lagged Variance (Lag=1)         Dollar/Pound         0.9502       0.0075        0.000
Lagged Variance (Lag=1)         Yen/Pound            0.9296       0.0123        0.000
Lagged Residuals (Lag=1)        Mark/Pound           0.0459       0.0076        0.000
Lagged Residuals (Lag=1)        Dollar/Pound         0.0382       0.0060        0.000
Lagged Residuals (Lag=1)        Yen/Pound            0.0541       0.0090        0.000

Table 3: GARCH(1,1) processes for three currency pairs with three independent variables


5.4. Comparison among the different methods of modelling volatility

Apart from focusing on the residuals of a pure Auto-Regressive Moving Average (ARMA) model, it is possible to estimate the residuals of a standard multiple regression model as ARCH or GARCH processes.

A two-parameter ARCH variance process was implemented in order to ensure the non-negativity and stationarity constraints, which might not be satisfied with more parameters. As already mentioned, the necessary and sufficient conditions on the variance function are: 1) the constant (homoscedastic) term should be greater than or equal to zero, and 2) the coefficient of the lagged residuals (Lag=1) should be greater than zero and less than unity.

Here, the estimated parameters of the ARCH(1) process, for all three currency pairs, satisfy the above conditions. For the mark and dollar exchange rates, the estimated homoscedastic terms in the variance function are approximately zero, while for the yen the same coefficient is both positive and significantly different from zero. The coefficients of the lagged residuals are all statistically significant, and the estimated values of the variance function imply that the variance itself is convergent.

Following McCurdy and Morgan (1987), interest rates were also introduced into the mean equation, in order to check whether interest rate parity holds. The coefficients of this variable have the expected sign for Germany and Japan, while the sign in the case of the USA is not consistent with the associated theory; all three values are statistically different from zero. In the case of the mark/pound exchange rate, a divergent process is certain, because the first-lag exchange rate coefficient is greater than unity. For the other currency pairs, the estimated values of the same coefficient are 'dangerously' close to unity, also suggesting divergence. Such high values might produce faulty exchange rate predictions; thus, the ARCH(1) model with two independent variables cannot be considered a suitable model for predicting exchange rates. In an attempt to reduce the first-lag coefficients, the second lag of the foreign exchange rate was introduced. In only two of the three currency pairs does this reduce the first-lag coefficient, while the coefficient of the second lagged series (Lag=2) is not significantly different from zero.

Having secured the non-negativity constraints for the two parameters of the variance function, schemes containing more lagged residual series were applied; ARCH(2) and ARCH(3) models were therefore estimated. In both cases, all the conditions that should characterise an ARCH process hold.

Models containing a large number of lagged residual series, such as ARCH(4) and ARCH(5), can be better described by GARCH processes. The concept behind these models is that all "high-distance" lagged residuals can be represented by the lagged variance, so the lagged variance enters the variance function.
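The GARCH(1,1) variance recursion behind this idea can be sketched as follows; the residual series is simulated and the parameter values are illustrative, merely of the same order of magnitude as the estimates reported in the tables, not the paper's own fit:

```python
import numpy as np

def garch_variance(resid, omega, alpha, beta):
    """GARCH(1,1) conditional variance recursion:
    h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}."""
    h = np.empty(len(resid))
    h[0] = resid.var()                    # initialise at the sample variance
    for t in range(1, len(resid)):
        h[t] = omega + alpha * resid[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(5)
e = rng.normal(0.0, 0.005, 500)           # hypothetical regression residuals
h = garch_variance(e, omega=1e-7, alpha=0.05, beta=0.94)
print(h[-1] > 0, (0.05 + 0.94) < 1)       # positive variance, alpha+beta<1
```

Because the lagged variance term already aggregates the whole history of squared residuals, one (alpha, beta) pair stands in for the long tail of lags that a high-order ARCH would need.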

Applying the GARCH(1,1) model with different numbers of independent variables, there is a strong indication that it explains the behaviour of exchange rates better than any ARCH process. However, the values of the lagged exchange rate (Lag=1) coefficients are still 'dangerously' close to unity, and when the second lagged exchange rate series is inserted into the equation, its estimated coefficients are insignificant at the 5% level.

It must be pointed out that the GARCH(1,1) processes, for all three currency pairs, explain more than 98% of the total variation of exchange rates. But it is not this feature that makes the above estimations important; it is the statistically significant coefficients of both the lagged residuals and the lagged variances that make these estimates valuable.


Not all currency movements can be explained by a single ARCH or GARCH model; instead, each currency pair has its own model that describes it best. Mark/pound exchange rates are best described by a GARCH(1,1) process with three independent variables, and yen/pound exchange rates by an ARCH(1) process with two independent variables. Finally, dollar/pound exchange rates show the clearest indications of convergence, because the lagged exchange rate coefficient is well below unity; although one coefficient is not statistically significant, a GARCH(1,1) process with two independent variables describes dollar movements better than any other model.

5.5. Monte Carlo simulation

This section tests in practice the forecasting ability of the model for each currency pair. Foreign exchange rates were generated using the Monte Carlo method, chosen because it is easy to apply. Estimating future exchange rates gives us the ability to choose among options with different strike prices: for example, if a high exchange rate is expected, the call currency option with the lowest strike price will be chosen, maximising the potential profit. The nature of the exercise implies a static simulation: past predicted exchange rates are not treated as the starting point of any future prediction; instead, realised values of both the lagged residuals and the exchange rates are used to estimate the forthcoming value. The following presentation concentrates on a single model, ARCH(1), as a common basis for comparing the three currency pairs. This restriction is imposed by the heavy computational requirements: simulating more complex models, such as ARCH(3) or GARCH(1,1), was too time-consuming on ordinary computers within the existing econometric packages. All simulated foreign exchange rates were derived using RATS386.

A measure of the forecasting ability of the ARCH(1) model was introduced for each of the three currency pairs. Because the currency pairs are quoted at different exchange rate levels, percentage deviations from the true values are used: each estimated value is compared with the true value to derive the percentage deviation, expressed as the absolute change in the exchange rate relative to the previous day's value. The sum of squared deviations is then divided by the total number of observations.
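The static one-step-ahead simulation and the percentage-deviation measure can be sketched as follows; the ARCH(1) coefficients, the simulated rate series and the number of draws are hypothetical, not the estimates of this study:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical ARCH(1) data-generating process for a daily exchange rate:
#   s_t = b1 * s_{t-1} + e_t,  e_t ~ N(0, h_t),  h_t = a0 + a1 * e_{t-1}^2
b1, a0, a1 = 1.0, 1e-6, 0.14

n = 300
s = np.empty(n); s[0] = 1.55
e = np.zeros(n)
for t in range(1, n):
    h = a0 + a1 * e[t - 1] ** 2
    e[t] = rng.normal(0.0, np.sqrt(h))
    s[t] = b1 * s[t - 1] + e[t]

# Static simulation: every one-step forecast starts from REALISED lagged
# values, never from a previously simulated rate.
draws = 1000
fc = np.empty(n - 1)
for t in range(1, n):
    h = a0 + a1 * e[t - 1] ** 2
    fc[t - 1] = np.mean(b1 * s[t - 1] + rng.normal(0.0, np.sqrt(h), draws))

# Absolute percentage deviations relative to the previous day's rate;
# their mean square is the accuracy measure described above.
dev = np.abs(fc - s[1:]) / s[:-1]
print(f"mean squared percentage deviation: {np.mean(dev ** 2):.2e}")
```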

Despite our belief that mark/pound exchange rates follow a divergent process, under the modified RMSPE this currency pair yields lower values than the other two. However, the predictive ability estimates of ARCH(1) for the dollar and the mark are very close to each other.

Moreover, all three currency pairs yield considerably low values, the best indication that the ARCH(1) process, as a whole, describes foreign exchange rate movements effectively.

5.6. Empirical comparison between implied and historical volatility

The volatility of financial markets has long been a favoured subject of study for participants and academics alike. Although the many and varied approaches to the subject over the past decades have yet to reach a consensus on how volatility should be modelled, one indisputable fact is that volatility is 'volatile' (Fung and Hsieh, 1991).

Up to now, exchange rate volatility was modelled by ARCH and GARCH models. Apart from these approaches, it is useful to examine two other ways of estimating future volatility: the implied and historical volatility approaches.

As far as historical volatility is concerned, there is a problem of measurement. A number of studies on this subject (French, Schwert and Stambaugh, 1987; Schwert, 1990) have used standard deviations of daily price changes to study volatility changes in the market.

Although the above authors used daily data to examine monthly changes in historical volatility, this study takes a slightly different approach: a historical measure of volatility (the standard deviation of logged prices) is calculated for each maturity using the same duration of past observations as the period to be forecast (Tan and Dickinson, 1990; Hull, 1997), i.e., six weeks of daily prices for a six-week expiry.

An alternative is to use implied volatilities derived from observed option prices to study daily changes in volatility. This concept is used to construct our own implied volatility estimates, calculated at the end of each trading day. PHLX currency options are usually quoted in annualised percentage standard deviation rather than price, so the days-to-expiration input gives a volatility estimate for the remaining days. To derive implied volatilities we used values from call options, obtained randomly from the PHLX currency option diskettes in order to keep our predictions unbiased.

To derive implied volatilities, a self-written programme in "Turbo PASCAL 6.0" was used. The programme exploits the fact that call and put options have the same derivative with respect to volatility; taking this as a starting point, the Newton-Raphson method was applied. The results were written to a text output file and can be seen in the appendix.
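The Newton-Raphson idea can be sketched in a few lines; the option inputs in the usage example are hypothetical, and the code merely illustrates the iteration described above, here applied to a call priced with the Garman-Kohlhagen formula:

```python
import math

def gk_call(S, K, T, rd, rf, sigma):
    """Garman-Kohlhagen (1983) European currency call price."""
    d1 = (math.log(S / K) + (rd - rf + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * math.exp(-rf * T) * N(d1) - K * math.exp(-rd * T) * N(d2)

def vega(S, K, T, rd, rf, sigma):
    """dPrice/dSigma, identical for calls and puts, which is the property
    the PASCAL programme relies on."""
    d1 = (math.log(S / K) + (rd - rf + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return S * math.exp(-rf * T) * math.sqrt(T) * math.exp(-0.5 * d1 ** 2) / math.sqrt(2.0 * math.pi)

def implied_vol(price, S, K, T, rd, rf, sigma0=0.10, tol=1e-8):
    """Newton-Raphson iteration for the volatility implied by a call premium."""
    sigma = sigma0
    for _ in range(100):
        diff = gk_call(S, K, T, rd, rf, sigma) - price
        if abs(diff) < tol:
            break
        sigma -= diff / vega(S, K, T, rd, rf, sigma)
    return sigma

# Hypothetical $/£ call: spot 1.55, strike 1.55, 30 days, 5% / 6% rates
p = gk_call(1.55, 1.55, 30 / 365, 0.05, 0.06, 0.09)
print(round(implied_vol(p, 1.55, 1.55, 30 / 365, 0.05, 0.06), 4))  # recovers 0.09
```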

Thus, this section presents a method for assessing these alternative measures of volatility and provides evidence on their behaviour and their ability to forecast future exchange rate volatility. The main issue is whether implied volatility measures are better than standard measures based on historical exchange rates.

Towards the end of our sample, relatively high values of implied volatility are evident. This could be due to illiquidity in the currency options market, which appears as the expiration date draws closer and investors hesitate to buy or sell options. To compare implied and standard measures of volatility, we use the RMSPE presented in the Methodology section; the results are shown in the tables below.

Currency Pair    Modified RMSPE
Mark/Pound           0.004763
Dollar/Pound         0.005085
Yen/Pound            0.010288

Table 4: Root Mean Squared Prediction Error by currency pair

Volatility Measure       RMSPE
Historical Volatility    0.006324
Implied Volatility       0.023969

Table 5: Root Mean Squared Prediction Error of the two volatility measures


These results show that historical volatility is a better predictor of future exchange rate volatility than implied volatility. The difference is sufficient to indicate, for our data, the superiority of historical volatility. This conclusion is in line with the results of Jorion (1988) but contradicts the view taken by Natenberg (1994).

However, the above results should not be taken as certain, because our analysis did not use any weighting scheme to deal with the 'volatility smile'. The implied volatilities may therefore appear relatively high or low with respect to a properly weighted implied volatility. In addition, put options were not used in obtaining implied volatilities.

All the above reasons call the superiority of historical volatility into question, so further and more extended research on this subject is needed.

5.7. ‘Volatility trading’: a simple trading rule

All of the above assessment becomes important if, and only if, it can be proved profitable. Although many strategies have been devised to profit from trading currency options, the most appropriate for our case is so-called 'volatility trading'.

In this section we devise a trading rule to exploit our information on both types of volatility. According to this rule, we sell or buy a 'straddle': selling a straddle means selling a put and a call currency option simultaneously, for the same currency amount, strike price and expiration date; buying a straddle means buying the corresponding put and call.

From the previous analysis, historical volatility is a better forecast than implied volatility. If implied volatility is less than historical volatility, we buy a straddle: we are buying two options whose volatilities are underestimated and which are therefore cheaper than their true value. Conversely, if implied volatility is greater than historical volatility, we sell a straddle: we are selling two options whose volatilities are overestimated and which are therefore more expensive than their true value. In the following application, seven different straddles with six different expiration months are constructed.
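A minimal sketch of the decision rule, together with the profitable range as it appears to be constructed in the tables below (lower bound: strike minus the put premium; upper bound: strike plus the call premium, a convention reconstructed from the reported figures and therefore an assumption):

```python
def straddle_decision(avg_implied_vol, historical_vol):
    """Buy the straddle when average implied vol is below historical vol,
    sell (write a 'top' straddle) when it is above."""
    return "buy" if avg_implied_vol < historical_vol else "sell"

def profitable_range(strike, call_premium_cents, put_premium_cents):
    """Band bounded by strike minus the put premium and strike plus the
    call premium (premia in cents per pound). A sold straddle profits
    inside the band, a bought straddle outside it."""
    return (strike - put_premium_cents / 100.0,
            strike + call_premium_cents / 100.0)

# Second straddle of the tables: 03/01/1996, strike 1.54, call 2.18, put 1.20
lo, hi = profitable_range(1.54, 2.18, 1.20)
print(straddle_decision(10.8836, 6.4535), round(lo, 4), round(hi, 4))
# -> sell 1.528 1.5618
```

Note that the standard break-even band of a straddle would use the sum of both premia on each side; the narrower band above simply mirrors the figures reported in the profit table.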

Trading Day   Expiration Month   Option Type   Strike Price   Implied Volatility   Average Implied Volatility   Historical Volatility
29/12/1995    01/1996            Call          1.55             4.4514
29/12/1995    01/1996            Put           1.55             3.5210              3.9862                       6.5092
03/01/1996    02/1996            Call          1.54            14.7027
03/01/1996    02/1996            Put           1.54             7.0646             10.8836                       6.4535
12/01/1996    03/1996            Call          1.52            18.0595
12/01/1996    03/1996            Put           1.52             5.9648             12.0121                       3.6884
18/01/1996    03/1996            Call          1.55             5.4483
18/01/1996    03/1996            Put           1.55            20.1774             12.8128                       2.9865
24/01/1996    06/1996            Call          1.50            15.8065
24/01/1996    06/1996            Put           1.50            15.3264             15.5664                       5.9858
26/01/1996    04/1996            Call          1.52            19.0548
26/01/1996    04/1996            Put           1.52            23.5741             21.3144                       4.8629
01/05/1996    05/1996            Call          1.50             2.78006
01/05/1996    05/1996            Put           1.50             3.8473              3.3139                       4.1382

Table 6: Implied and historical volatilities for the seven straddles


Trading Day   Expiration Month   Option Type   Strike Price   Premium   Profitable Range      Realized Profit (cents)
29/12/1995    01/1996            Call          1.55           0.90
29/12/1995    01/1996            Put           1.55           0.74      <1.5426 or >1.5590     1.11
03/01/1996    02/1996            Call          1.54           2.18
03/01/1996    02/1996            Put           1.54           1.20      1.5280-1.5618          3.30
12/01/1996    03/1996            Call          1.52           3.25
12/01/1996    03/1996            Put           1.52           0.98      1.5102-1.5525          2.35
18/01/1996    03/1996            Call          1.55           0.85
18/01/1996    03/1996            Put           1.55           3.66      1.5134-1.5585          3.39
24/01/1996    06/1996            Call          1.50           4.05
24/01/1996    06/1996            Put           1.50           3.35      1.4665-1.5405         -1.11
26/01/1996    04/1996            Call          1.52           1.70
26/01/1996    04/1996            Put           1.52           3.60      1.4840-1.5370          4.11
01/05/1996    05/1996            Call          1.50           0.55
01/05/1996    05/1996            Put           1.50           0.74      <1.4926 or >1.5055     0.01

Table 7: Realized Profits

We built two straddles for the same expiration month. The software produces two implied volatility values for each straddle, one for the call and one for the put; we average the two values and compare the average with the historical volatility of the specific trading day.

The most appropriate trading strategies can be identified from the table. The first and the last option pairs (put and call) indicate that we should buy a straddle, while the other pairs require the construction of 'top' (sold) straddles. It should be pointed out that the implied volatilities in the table differ from those derived in the previous section, evidence supporting the view that an average of implied volatilities from several different options is a better approximation than a single volatility estimate.

By adding the premia, we can attempt to establish whether our trading rule is effective. The table of realised profits therefore presents the ranges within which each trading strategy is profitable; whether these strategies were indeed profitable is clarified in the following analysis.

All payoffs in the table are expressed in cents per transaction for every pound sold or bought. As demonstrated, six of the seven cases yield profits, five of them considerable, consistent with the 'volatility trading' rule.

The only case in which losses were incurred was the one with the longest interval to expiration (approximately six months). This suggests that it is preferable to trade straddles with a short time to maturity: over the long run, there is a greater possibility of unexpected events, so 'volatility trading' may prove an effective policy only in the short run.

6. Conclusion

This paper examined the specific characteristics of foreign exchange rates and, with these characteristics in mind, attempted some efficient predictions, starting from a review of the existing literature. The literature offers many alternative ways of estimating future exchange rate values, varying from "naive" OLS regressions to extremely sophisticated models (ARCH, GARCH, EGARCH, ARCH-M).

In our analysis, a wide range of these models was applied, but we focused on the approaches to modelling exchange rate volatility that showed the most significant results: ARCH(1), ARCH(2), ARCH(3), GARCH(1,1), historical volatility estimates, and volatility estimates implied from option prices.

It is natural for the reader to wonder about the relationship between volatility and exchange rate forecasts. A quick answer would be that there is no relationship between them. However, decades of research have shown that as uncertainty increases, prediction becomes more difficult.

In addition, an alternative view is presented on this subject. An accurate exchange rate volatility prediction not only assesses the risk undertaken, but also increases the realised profit from trading foreign currency options. The idea behind this view is simple. As predicted volatility increases, call and put currency option values increase, owing to the positive relationship between volatility and the option premium. Thus, for a given predicted foreign exchange rate, our net profit is reduced as the option premium rises; the opposite happens when volatility decreases.
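The positive relationship between volatility and the option premium can be illustrated with the Garman-Kohlhagen model for European currency options; the numerical inputs below (spot, strike, rates, maturities) are illustrative only:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gk_call(S, K, T, r_d, r_f, sigma):
    """Garman-Kohlhagen value of a European currency call:
    S spot rate, K strike, T years to expiry, r_d/r_f domestic and
    foreign interest rates, sigma annualised volatility."""
    d1 = (log(S / K) + (r_d - r_f + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-r_f * T) * norm_cdf(d1) - K * exp(-r_d * T) * norm_cdf(d2)

# the same at-the-money call priced at low and high volatility
low = gk_call(1.60, 1.60, 0.25, 0.05, 0.04, 0.08)
high = gk_call(1.60, 1.60, 0.25, 0.05, 0.04, 0.16)
```

Doubling the volatility input raises the premium, which is the monotone relationship the argument above relies on.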

From the above, it is concluded that we should buy options when volatility is relatively low and sell options when volatility is relatively high. In this study, implied and historical volatility were compared. Our results contrast with existing findings in the international literature: while the majority of academics and practitioners take the view that implied volatility is a better forecast of future volatility than historical volatility, we found historical volatility to be the better approximation.

One way to exploit the difference between historical and implied volatility is to buy options when implied volatility is below historical volatility and to sell options when it is above. However, this strategy of purchasing or selling a single call or put option can prove ‘dangerous’, in the sense that the exchange rate might move adversely. For example, volatility may put downward pressure on foreign exchange rates when an upward movement was expected, causing a call option to become worthless.

In order to avoid the consequences of such an adverse movement, a combined position was constructed, known in the market as a ‘straddle’. This strategy proved profitable in six of the seven cases examined in this paper.
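The direction-neutral character of the long straddle can be seen from its expiry payoff; the premium figures below are illustrative:

```python
def long_straddle_payoff(s_T, strike, call_premium, put_premium):
    """Expiry payoff of buying one call and one put at the same strike:
    profitable whenever the spot ends far enough from the strike in
    either direction to cover the combined premium paid."""
    return (max(s_T - strike, 0.0) + max(strike - s_T, 0.0)
            - (call_premium + put_premium))
```

An equal move up or down produces the same gain, while a spot that stays near the strike loses at most the two premia; this is exactly why the straddle removes the directional risk of holding a single call or put.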

There is also an associated problem with volatility: its change over time. The above strategy assumes that volatility remains constant over the remaining interval up to the expiration date of the currency option. This assumption is plausible if the time to expiration is short, so intervals of six months or less were used. This experimentation confirmed the theoretical background: short-term ‘volatility trading’ is profitable, while long-term ‘volatility trading’ (six months) causes losses.

One of the main objectives of the study was to identify whether or not volatility changes over time. Several approaches were therefore tried for testing for heteroscedasticity. All tests except White’s produced conflicting results; White’s test yielded homogeneous results for all three currency pairs. These results agree with the vast majority of the literature: heteroscedasticity exists.

Once heteroscedasticity was identified, alternative ARCH and GARCH models were applied. Although the variance function satisfied the restrictions of the proposed models, the ARMA stochastic equation implied some kind of divergent process: the first-lag coefficients appeared dangerously close to unity. If this fact is set aside, however, ARCH and GARCH models describe foreign currency movements extremely well.
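The GARCH(1,1) conditional variance follows the recursion h_t = ω + α·ε²_{t-1} + β·h_{t-1}; a minimal sketch is given below, where the parameter values are illustrative and α + β close to one corresponds to the near-unity persistence noted above:

```python
def garch11_variances(residuals, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) process, started at
    the unconditional variance omega / (1 - alpha - beta)."""
    h = [omega / (1.0 - alpha - beta)]
    for eps in residuals[:-1]:
        h.append(omega + alpha * eps * eps + beta * h[-1])
    return h

# with zero residuals the variance decays geometrically (rate beta)
# towards omega / (1 - beta)
h = garch11_variances([0.0, 0.0, 0.0], omega=0.1, alpha=0.1, beta=0.8)
```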

Specifically, modified Root Mean Squared Prediction Errors (RMSPE) taken from Monte Carlo simulations show that the predictive ability of ARCH models outstrips that of implied volatility. Moreover, the modified RMSPE from the dollar/pound ARCH Monte Carlo simulation appears to outperform that of the historical volatility estimates. Although historical volatility was found to ‘beat’ implied volatility, this result cannot be asserted with certainty, for several reasons. First, the whole sample was not used in comparing historical and implied volatility. Second, weighting schemes were ignored in the calculation of implied volatility. Moreover, no allowance was made either for the phenomenon of the ‘volatility smile’ or for options with different expiration dates.
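The modified RMSPE on percentage deviations, used here to compare forecasts across currency pairs of different scales, might be computed as:

```python
from math import sqrt

def modified_rmspe(actual, predicted):
    """RMSPE computed on percentage deviations from actual values, so
    that series of different magnitudes (e.g. different currency pairs)
    become directly comparable."""
    devs = [(p - a) / a for a, p in zip(actual, predicted)]
    return sqrt(sum(d * d for d in devs) / len(devs))
```

A forecast that is consistently 10% off in either direction yields a modified RMSPE of 0.1 regardless of the level of the exchange rate.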

In addition, we did not take into consideration the difference between the volatilities implied in put and call options. Evidence of a significant difference between call and put implied volatilities was found while investigating ‘volatility trading’; this remains an area of extensive work for the future researcher.

Under the assumption of identical option prices, American options were used instead of European options (though not in all cases), in order to overcome the illiquidity of European options on the Philadelphia Stock Exchange.

As far as historical volatility is concerned, optimal predictions of volatility are likely to give gradually decreasing weights to successively older observations. On the other hand, short intervals (less than two months) were used for the estimation of historical volatility. This means that the volatility forecasts are updated rapidly and our approach can be considered efficient.
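A declining-weight scheme of the kind described can be sketched with an exponentially weighted moving average; the decay factor λ = 0.94 below is the classic RiskMetrics choice and is an assumption of this sketch, not a value taken from the study:

```python
from math import sqrt

def ewma_volatility(returns, lam=0.94):
    """Volatility estimate giving geometrically decreasing weight to
    older returns; `returns` is ordered oldest to newest."""
    weights = [(1.0 - lam) * lam ** i for i in range(len(returns))]
    # pair the largest weight (lam**0) with the most recent return
    var = sum(w * r * r for w, r in zip(weights, reversed(returns)))
    return sqrt(var / sum(weights))
```

Because the weights decay geometrically, a volatility shock enters the estimate immediately and fades out smoothly, which is the rapid-updating property claimed above.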

In order to test the relative ability of the several models, some innovations were introduced. Among them, a Turbo PASCAL 6.0 programme was written to derive implied volatilities. To make all three currency pairs comparable, the modified RMSPE, which takes percentage deviations from actual values as inputs, was also introduced. To sum up, we explored the world of foreign exchange rates and currency options and constructed a device for ‘volatility trading’. Although we did not proceed in great depth, we found interesting features of currency and currency option movements that can be re-examined and extended by future researchers.
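The implied-volatility routine was written in Turbo PASCAL in the original study; a modern analogue, sketched here in Python using bisection on the European (Garman-Kohlhagen) price and assuming the quoted premium lies in the model's attainable range, could look like:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gk_call(S, K, T, r_d, r_f, sigma):
    """Garman-Kohlhagen European currency call price."""
    d1 = (log(S / K) + (r_d - r_f + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-r_f * T) * norm_cdf(d1) - K * exp(-r_d * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r_d, r_f, lo=1e-4, hi=2.0, tol=1e-8):
    """Bisect for the volatility that reproduces the quoted premium;
    works because the call price is increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gk_call(S, K, T, r_d, r_f, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# round-trip check: recover the volatility used to generate a model price
price = gk_call(1.60, 1.60, 0.25, 0.05, 0.04, 0.12)
sigma = implied_vol(price, 1.60, 1.60, 0.25, 0.05, 0.04)
```

Note that this prices a European option, whereas the study used American options in some cases under an identical-price assumption.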

References
