In contrast to previous approaches, we model the whole implied volatility surface in order to price options simultaneously, using the BS formula as the mapping between option prices and IV. Gonçalves and Guidolin (2005) combine a cross-sectional approach similar to that of Dumas, Fleming, and Whaley (1998) with vector autoregressive models, with exactly this idea in mind. Their findings are mixed: the model provides a good in-sample fit but questionable out-of-sample performance. Even the more complex and computationally demanding dynamic semi-parametric factor model (DSFM) of Fengler, Härdle, and Mammen (2007) has quite limited predictive power. In a comparison of the one-day prediction error, the DSFM performs 10 percent better than a simple sticky-moneyness model, in which IV is taken to be constant over time at a fixed moneyness.
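The mapping from option prices to IV mentioned above can be sketched as a numerical inversion of the Black-Scholes formula. A minimal sketch using bisection is given below; all input values (spot, strike, maturity, rate) are hypothetical, and the function names are illustrative rather than taken from any of the cited papers.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Invert BS for sigma by bisection: the call price is monotone in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Since the call price is strictly increasing in volatility, bisection is guaranteed to converge; faster root-finders (e.g. Newton steps using vega) are common in practice.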
the forecasting diagnostics for market risk measurement, such as the tests by Kupiec (1995) and Christoffersen (1998), the multinomial test of VaR violations by Kratz et al. (2018), the asymmetric quantile loss (QL) function proposed by Gonzalez-Rivera et al. (2004), and the Model Confidence Set by Hansen et al. (2011). The first contribution of this paper is an assessment of the contribution of both online search queries and options-based implied volatility to modelling the volatility of the Russian RTS index future, and of how this dependence has changed over almost two decades (from 2006 to 2019). To our knowledge, this analysis has not been done elsewhere. The second contribution is an out-of-sample forecasting exercise for the Value-at-Risk of the RTS index future at multiple confidence levels, using several alternative model specifications with and without Google data and implied volatility. The third contribution of the paper is a robustness check measuring the accuracy of Value-at-Risk forecasts obtained with a multivariate model.
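Of the diagnostics listed above, the asymmetric quantile loss of Gonzalez-Rivera et al. (2004) is the simplest to state: it is the standard "tick" loss evaluated at the VaR level, penalising violations more heavily than non-violations. A minimal sketch, with illustrative function and argument names:

```python
def quantile_loss(returns, var_forecasts, alpha):
    """Average asymmetric quantile (tick) loss for a series of VaR forecasts.

    For each day, the loss is (alpha - hit) * (r - q), where q is the
    alpha-quantile (VaR) forecast and hit = 1 if the return r falls below q.
    Each term is non-negative; lower averages indicate better forecasts.
    """
    total = 0.0
    for r, q in zip(returns, var_forecasts):
        hit = 1.0 if r < q else 0.0
        total += (alpha - hit) * (r - q)
    return total / len(returns)
```

The Kupiec and Christoffersen tests instead examine the sequence of hit indicators itself, comparing the empirical violation rate (and its clustering) against the nominal level alpha.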
Despite the increasing popularity of the VIX, measurement errors in its construction have been noted by Jiang & Tian (2005). The common problem inherent in the computation of the VIX as well as other measures of model-free implied volatility is that only a discrete set of strikes is actually traded in the market and that very low and high strikes are usually absent. To account for measurement errors induced by the limited number of strikes, Jiang & Tian (2005) apply the cubic spline method to interpolate between existing strikes and exploit a flat extrapolation scheme to infer option prices beyond the truncation point. Andersen & Bondarenko (2007) address the issue induced by the discrete set of strikes via the positive convolution approximation method proposed by Bondarenko (2003). Although interpolation and extrapolation techniques are widely accepted, it remains unclear how such techniques affect the performance of implied volatilities in predicting future returns and realized volatility. In addition, there appears to be no consensus on the roles played by the OTM call and put options in the forecast of future volatility and returns. Jackwerth (2000), Jones (2006) and Bates (2008) suggest that the OTM put options may be irrelevant to known risk factors affecting stock returns. Using a cubic spline interpolation and flat extrapolation methods, Dotsis & Vlastakis (2016) also find that the OTM put options, especially deep OTM puts, do not contain important information with respect to equity volatility risk. They also show that the OTM call options subsume all useful information embedded in the OTM puts for forecasting future realized volatility. However, Andersen et al. (2015) show that the left tail risk, driving a substantial part of the OTM put option dynamics, exhibits strong predictive power for future excess market returns over long horizons.
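The interpolation/extrapolation scheme described above — a cubic spline between traded strikes with flat extrapolation beyond the lowest and highest quotes — can be sketched as follows. This uses a natural cubic spline (zero second derivatives at the endpoints), which is one common choice; the cited papers do not pin down the boundary conditions, so treat this as illustrative.

```python
def natural_cubic_spline(xs, ys):
    """Return f(x): natural cubic spline through (xs, ys) on [xs[0], xs[-1]],
    with flat (constant) extrapolation outside the traded strike range."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    # Tridiagonal system for the second derivatives M (natural: M[0] = M[-1] = 0)
    a, b, c, d = [0.0] * n, [1.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm (forward elimination, back substitution)
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * n
    M[n - 1] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def f(x):
        if x <= xs[0]:       # flat extrapolation below the lowest strike
            return ys[0]
        if x >= xs[-1]:      # flat extrapolation above the highest strike
            return ys[-1]
        i = max(j for j in range(n - 1) if xs[j] <= x)
        t = x - xs[i]
        return (ys[i]
                + t * ((ys[i + 1] - ys[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6.0)
                + t * t * M[i] / 2.0
                + t ** 3 * (M[i + 1] - M[i]) / (6.0 * h[i]))
    return f
```

Applied to implied volatilities (or option prices) quoted at a handful of strikes, `f` fills in the continuum of strikes needed for a model-free variance integral, while the flat tails stand in for the unobserved deep OTM wings.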
robustness of the results and stability of coefficients over time. The time periods are listed in Table 2. Period 1 covers the entire in-sample period. The dates chosen for Period 2 reflect the starting point of more volatile behavior in the VIX. Periods 3 and 4 were chosen based on the number of observations. Studies such as Noh et al. (1994) and Blair et al. (2001) use 1,000 observations when calculating forecasts from GARCH models. Forecasts were also calculated with only 500 observations to see whether forecast performance would improve with a shorter period of observations: conditions in financial markets can change rapidly, and perhaps only the most recent information is relevant for forecasting purposes. The in-sample estimation period ends for all time periods on 31.12.2002, and the first forecasts are calculated for 2.1.2003.
SV models can be estimated by quasi-maximum likelihood methods, but the main emphasis will be on methods for exact maximum likelihood using Monte Carlo importance sampling. The performance of the models is evaluated, both in-sample and out-of-sample, for daily returns on the Standard & Poor's 100 index. Similar studies have been undertaken with GARCH models, where findings were initially mixed, but recent research has indicated that implied volatility provides superior forecasts. We find that implied volatility outperforms historical returns in-sample but that the latter contain incremental information in the form of stochastic shocks incorporated in the SVX models. The out-of-sample volatility forecasts are evaluated against daily squared returns and intradaily squared returns for forecasting horizons ranging from 1 to 10 days. For the daily squared returns we obtain mixed results, but when we use intradaily squared returns as a measure of realised volatility we find that the SVX+ model produces the most accurate forecasts.
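The two evaluation targets mentioned above are simple to compute. The intradaily measure — realized variance — is the sum of squared intraday log returns, which is a far less noisy volatility proxy than the single squared daily return. A minimal sketch (function names are illustrative):

```python
from math import log

def realized_variance(intraday_prices):
    """Realized variance for one day: the sum of squared intraday log returns."""
    rets = [log(intraday_prices[i + 1] / intraday_prices[i])
            for i in range(len(intraday_prices) - 1)]
    return sum(r * r for r in rets)

def daily_squared_return(open_price, close_price):
    """The noisier alternative proxy: the squared daily log return."""
    return log(close_price / open_price) ** 2
```

Evaluating forecasts against `realized_variance` rather than `daily_squared_return` reduces the measurement noise in the target, which is why the rankings of the models can differ between the two proxies.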
variables.  evaluated the role of online search activity in forecasting the realized volatility of financial and commodity markets using models that also include market-based variables. They found that Google search data play a minor role in predicting realized volatility once implied volatility is included in the set of regressors. Therefore, they suggested that there might exist a common component between implied volatility and Internet search activity: in this regard, they found that most of the predictive information about realized volatility contained in Google Trends data is also included in implied volatility, whereas implied volatility has additional predictive content that is not captured by Google data.
Areas for further research could involve the use of alternative models such as the AP-GARCH specification. Another possibility would be to move away from univariate models to multivariate GARCH models (see, for example, Silvennoinen, 2008), incorporating independent macroeconomic variables such as interest rates, fiscal indicators, current account balances, money supplies and government expenditure. Another approach could be that taken by Bildirici and Ersin (2011), who suggest supplementing GARCH models with neural networks to improve their forecasting ability.
The relationship between volatility smiles and market liquidity is also addressed in Grossman and Zhou (1996), Platen and Schweizer (1998), and Frey and Patie (2002). In particular, these studies suggest that volatility smiles are related to feedback effects from dynamic hedging strategies. In an equilibrium framework, Grossman and Zhou (1996) assume that the demand for portfolio insurance is exogenously driven by a group of investors. In the context of their model, they explain why OTM put options exhibit higher implied volatilities than ITM puts. They conclude that volatility smiles reflect the equilibrium price impact of portfolio insurance. Platen and Schweizer (1998) develop a model in which the stock price takes the (demand) effect of hedging strategies into account. By using this model they provide numerical evidence that implied volatilities are due to feedback effects from hedging strategies. Frey and Patie (2002) criticise the approach of Platen and Schweizer (1998), as the latter employ an implausible model parameterisation to explain volatility smiles. However, in a related model, Frey and Patie (2002) propose that the volatility smile pattern is induced by market illiquidity. In particular, based on their model, they demonstrate that the lack of market liquidity due to a large market downturn leads to volatility skews. The assumption that large downward or upward asset price movements reduce the level of market liquidity is quite plausible. In the next Section, the effect of information asymmetry on implied
allow us to obtain an analytical closed-form approximation for European option prices under the GARCH diffusion model. This approximation can be easily implemented in any software package (such as an Excel spreadsheet). Then, simply plugging in the model parameters provides option prices without any computational effort. As we will show by Monte Carlo simulations, this approximation is very accurate across different strikes and maturities for a large set of reasonable parameters. Secondly, we propose an analytical approximation for implied volatilities based on the conditional moments of the integrated variance, which allows us to easily study volatility surfaces induced by GARCH diffusion models. Thirdly, the conditional moments of the integrated variance implied by the GARCH diffusion process generalize the conditional moments derived by Hull and White (1987) for log-normal variance processes. Finally, the conditional moments of the integrated variance can be used to estimate the continuous-time parameters of the GARCH diffusion model using high-frequency data. As already mentioned, Nelson’s theory suggests an appealing estimation procedure for the GARCH diffusion model parameters. By Monte Carlo simulation we investigate the accuracy of such inference results.
variance remains positive: ω > 0, α > 0, β > 0. We use a t-distribution for the error process. The GARCH(1,1) model is estimated applying the standard procedure, as explained in Bollerslev (1986) and Taylor (1986), but using rolling windows. Two rolling windows of fixed size were used. One contains 756 observations (approximately three years of data), and the other contains 1526 observations (approximately six years of data). The fixed-size window was chosen considering that six years of daily data is enough to capture the volatility dynamics of the series while yielding robust coefficients with relatively low standard errors. Recursive estimation was not used because the conditional predictive ability test to be applied later cannot be used when the forecasts are obtained using expanding windows (Giacomini and White (2006) rule out expanding-window forecasting schemes by assumption). The parameters, including the degrees of freedom of the t-distribution, were estimated by maximum likelihood using the Marquardt procedure.
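The rolling-window scheme described above can be sketched as follows. The maximum likelihood step (including estimation of the t-distribution's degrees of freedom) is elided here, and the parameter values passed in are placeholders rather than estimates; the sketch shows only the GARCH(1,1) variance recursion and the fixed-size window loop.

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    initialised at the sample variance of the window. The final element
    is the one-step-ahead variance forecast."""
    var0 = sum(r * r for r in returns) / len(returns)
    sig2 = [var0]
    for t in range(1, len(returns) + 1):
        sig2.append(omega + alpha * returns[t - 1] ** 2 + beta * sig2[-1])
    return sig2

def rolling_forecasts(returns, window, omega, alpha, beta):
    """One-step-ahead variance forecasts from a fixed-size rolling window
    (e.g. window = 756 or 1526, as in the text). In the full procedure the
    parameters are re-estimated by ML for every window; fixed values keep
    this sketch short."""
    return [garch11_variance(returns[t - window:t], omega, alpha, beta)[-1]
            for t in range(window, len(returns) + 1)]
```

Because each window has the same length, the forecasts satisfy the fixed-scheme assumption under which the Giacomini-White conditional predictive ability test is valid.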
To investigate in more detail the effect of the prior on the interpolation between traded strikes, we considered a hypothetical market with three traded options, expiring in 30 days, with strikes equal to 100, 95 and 105 percent of the spot price. We assumed that the implied volatilities of the options were 14%, 15% and 16%, respectively, and that = r = 0. We calibrated four different volatility surfaces for this dataset, using priors of 11%, 13%, 14% and 17%. The results are displayed in Figure 4. These calculations confirm our previous conclusions on the sensitivity of the implied volatility curve to the prior.
In addition to representing the credit market rather than the equity market, the credit default swap (CDS) market has certain features that make it particularly useful for forecasting forward volatilities. First, the maturities of credit default swaps are much longer than those of ordinary equity options. This makes it possible to forecast stock volatilities starting several years into the future. Second, the availability of constant-maturity CDS contracts with a range of maturities, n = 1, 2, 3, …, 10 years, makes it possible to back out forward volatilities for any calendar year 1 to 10 years into the future. This sets the CDS market apart from the markets used in previous studies, which rely on fewer and/or changing maturities. Third, the CDS market has become a mature market covering many regions, countries and firms (Byström, 2015). This makes the CDS market a promising new candidate for anyone who wants to estimate implied forward stock volatilities.
The relationship has been an important research topic and many studies have been devoted to it. The first study was done by Latane and Rendleman (1976). They use closing prices of options and stocks for 24 companies whose options are traded on the Chicago Board Options Exchange (CBOE) and conclude that implied volatility outperforms historical volatility in forecasting future realized volatility. Later, Chiras and Manaster (1978) and Beckers (1981) reach the same conclusion based on a broader sample of CBOE stock options. It should be noted that these studies concentrate on static cross-sectional rather than time-series forecasts.
I shall aim higher than essential uncertainty because essential uncertainty, as irrevocable as it may sound, in fact still keeps for the “future” (and for its unfolding through the staging of a time series) the same meaning and the same expectations as does the econometric vein it is criticizing. It differs only in filling up its representation with the impossibility of the forecast and the incommensurability of the econometrics. (This is what I meant when I said that it was the same conceptual switch, except that it is now set on the “off” position.) What I have in mind, by contrast, is a generalization of the notion of forecast that will surpass both the view that we may have a forecast today and the view that we may never have one. I will propose that a forecast is not just something that we can have or not have and carry with us as we march into the future, but that it is, generally, anything we could do today about the future. (So to trade is essentially to forecast.) Instead of lamenting the fact that we are handed over, with the market, a very strange beast indeed that fits none of our forecasting schemes and makes a joke of our econometric paradigm, and instead of wondering what sense to then make of the concept of “implied volatility” (a concept that seems to want to say something about the future variance of the logarithm of returns – more truthfully so, for that matter, than all the
this information in a way that highlights the economic value of adopting this forecasting approach. To this end, we implement a stylised options trading strategy experiment designed to exploit the volatility predictions of the FTS model, and we benchmark the performance against the CT11, GG06 and AR models. Following Bernales and Guidolin (2014), we utilise straddle trading strategies for our analysis, as these give exposure to movements in volatility while protecting against movements in the underlying FX rates. We proceed as follows. For a given currency pair and given maturity, we use the day-ahead prediction of the ATM volatility change under the FTS model as a signal to either buy or sell an ATM-straddle position of corresponding maturity. If the forecast is for volatility to increase, we go long the ATM straddle; if the forecast is for volatility to decrease, we go short the ATM straddle. We do this for each of the 1, 3, 6 and 9 month maturities, leading to a portfolio of straddle positions for which we record the net daily return. The market is assumed frictionless with no transaction costs. The option pricing model of Garman and Kohlhagen (1983) is used to convert implied volatility quotes to prices, using the appropriate Euribor, USD Libor, GBP Libor and JPY Libor rates as required. We replicate this for each of the CT11, GG06 and AR models. Consistent with our earlier analysis, we consider 1000, 500, 200 and 100 day out-of-sample periods to assess the trading performance.
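The building blocks of this trading rule can be sketched as follows: Garman-Kohlhagen pricing of the ATM straddle, plus a sign rule on the forecast volatility change. The input values below are illustrative, and the full exercise additionally nets positions across the four maturities.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gk_straddle(S, K, T, rd, rf, sigma):
    """Garman-Kohlhagen price of an FX straddle (call + put).
    rd, rf are the domestic and foreign interest rates; the put is obtained
    from the call via put-call parity."""
    d1 = (log(S / K) + (rd - rf + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * exp(-rf * T) * norm_cdf(d1) - K * exp(-rd * T) * norm_cdf(d2)
    put = call - S * exp(-rf * T) + K * exp(-rd * T)   # put-call parity
    return call + put

def position(dv_forecast):
    """+1 (long straddle) if ATM vol is forecast to rise, -1 (short) otherwise."""
    return 1 if dv_forecast > 0 else -1
```

An ATM straddle has little exposure to small moves in the underlying but positive vega, so its value rises with implied volatility; this is why the sign of the day-ahead ATM volatility forecast is a natural trading signal.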
Regarding the predictability of IV itself, we run the CPA test on the squared forecast error. Table 4 reports the p-values for testing the null hypothesis of equal predictive ability between the row and column models. Rejection of the null hypothesis is indicated by the superscripts + and −. A positive (negative) sign indicates that the row (column) model is outperformed by the column (row) model. There are 9 cases (out of 18) in which we reject the null hypothesis that the random walk and the ARMA-type models perform equally well. In these cases, the + sign denotes that the ARMA model performs better than the random walk. This suggests that there is a predictable pattern in the dynamics of implied volatility indices, in line with the results of Konstantinidi et al. (2008). When the predictive ability of the random walk is tested against the asymmetric ARMA models, the latter perform significantly better at the 1% level. The importance of asymmetry found in-sample carries over to the out-of-sample analysis. An ARMA model that takes into account the contemporaneous asymmetric effect - the ARMAX, ARIMAX and ARFIMAX models - always outperforms not only the random walk, but also the symmetric ARMA models. In these cases, the null hypothesis of equal predictive ability is always rejected at the 1% level.
Super-sticky-strike moves are those where we see both movement along the curve and a shift in the surface in the same direction. In other words, when the underlying sells off, implied volatility goes up by more than the skew would suggest (and the reverse on a rally) - moves in volatility are therefore exaggerated. Quick hint: in our daily comments, whenever you see both the “Strike-by-strike” and the “Implied by skew” figures green for a particular move (or both red), that is a super-sticky-strike move.
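The daily-comment heuristic above amounts to a sign check on the two quoted figures. A minimal sketch, with illustrative names (the inputs are the fixed-strike vol change and the change the skew alone would imply for the spot move):

```python
def classify_move(dvol_strike_by_strike, dvol_implied_by_skew):
    """Flag a super-sticky-strike move: the 'Strike-by-strike' and
    'Implied by skew' vol changes share the same sign (both green
    or both red in the daily comments)."""
    if dvol_strike_by_strike * dvol_implied_by_skew > 0:
        return "super-sticky-strike"
    return "other"
```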
market have led to the development of a considerable literature on alternative option pricing models, in which the dynamics of the underlying asset are modelled as a nonlinear diffusion, a jump-diffusion process or a latent volatility model. These models attempt to explain the various empirical deviations from the Black–Scholes model by introducing additional degrees of freedom such as a local volatility function, a stochastic diffusion coefficient, jump intensities, jump amplitudes etc. However, these additional parameters describe the infinitesimal stochastic evolution of the underlying asset, while the market usually quotes options directly in terms of their market-implied volatilities, which are global quantities. In order to see whether the model reproduces empirical observations, one has to relate these two representations: the infinitesimal description via a stochastic differential equation on one hand, and the market description via implied volatilities on the other hand.
To summarise, in terms of both the hit percentage and the DQ test, the results of the combining methods were better than those of the two individual methods. The PlugIn method was better than the two individual methods in terms of the hit percentage, but less convincing in terms of the DQ test. Among the combining methods, our study suggests that the LinearComb and SimpAvg methods are the best. As for why the LinearComb method performs better than the WtdAvg method, we would suggest that the inclusion of the intercept term in the model, and the lack of restrictions on the values of the weights, allow the individual quantile estimators to be debiased. This has certainly been the main argument in favour of the LinearComb method in the context of forecasting the level (or mean) of a time series (see Granger and Ramanathan, 1984). It is worth noting that the WtdAvgExp method offered a slight improvement over the simpler WtdAvg method in terms of the hit percentage, suggesting that there may be benefit in trying to optimise the combining weight by giving more weight to the more recent observations.
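The two best-performing combining methods can be sketched directly. In the LinearComb case the coefficients would in practice be estimated in-sample, e.g. by minimising the asymmetric quantile (tick) loss in the spirit of Granger and Ramanathan (1984); the names and values below are illustrative.

```python
def simp_avg(q1, q2):
    """SimpAvg: equal-weight average of two quantile (VaR) forecast series."""
    return [(a + b) / 2.0 for a, b in zip(q1, q2)]

def linear_comb(q1, q2, b0, b1, b2):
    """LinearComb: intercept plus unrestricted weights on the individual
    forecasts. The intercept, and the absence of constraints on b1 and b2,
    are what let the combination debias the individual quantile estimators."""
    return [b0 + b1 * a + b2 * b for a, b in zip(q1, q2)]
```

Setting b0 = 0 and b1 = b2 = 0.5 recovers SimpAvg, which makes the extra flexibility of LinearComb explicit.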