Despite these appealing features, we face practical difficulties when dealing with the ML estimation of GARCH-type models. First, the maximization of the likelihood function must be achieved via a constrained optimization technique, since some (or all) model parameters must be positive to ensure a positive conditional variance. It is also common to require that the covariance stationarity condition holds. This leads to complicated non-linear inequality constraints which render the optimization procedure cumbersome. Moreover, convergence of the optimization is hard to achieve if the true parameter values are close to the boundary of the parameter space and if the GARCH process is nearly non-stationary. Optimization results are often sensitive to the choice of starting values. Second, in standard applications of GARCH models, the interest usually does not center directly on the model parameters but on possibly complicated nonlinear functions of the parameters. For instance, a trader might be interested in the unconditional variance implied by a GARCH model, which is a (highly) non-linear function of the model parameters. In order to assess the uncertainty of such a quantity, classical inference involves tedious delta methods, simulation from the asymptotic Gaussian approximation of the parameter estimates, or the time-consuming bootstrap methodology. Third, the conditions for the optimal asymptotic properties of ML estimators to hold are fairly difficult to prove, yet are often assumed to hold in practice. Moreover, since GARCH-type models are highly non-linear, the asymptotic argument requires a very large amount of data to hold. This is obviously not always the case in practice.
Finally, in the case of GARCH with mixture disturbances or regime-switching GARCH models, testing for the number of mixture components or the number of regimes is not possible within the classical framework due to the violation of regularity conditions (see Frühwirth-Schnatter, 2006, Section 4.4).
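The unconditional-variance example above can be made concrete. For a GARCH(1,1) with parameters (ω, α, β), the unconditional variance is ω/(1−α−β), and a delta-method standard error can be obtained from the estimated parameter covariance matrix. A minimal sketch in pure Python, using a hypothetical parameter vector and a hypothetical diagonal covariance matrix for illustration only:

```python
import math

def unconditional_variance(omega, alpha, beta):
    """Unconditional variance of a GARCH(1,1): omega / (1 - alpha - beta).
    Defined only under covariance stationarity (alpha + beta < 1)."""
    if alpha + beta >= 1:
        raise ValueError("process is not covariance stationary")
    return omega / (1.0 - alpha - beta)

def delta_method_se(theta, cov, f, h=1e-6):
    """Standard error of f(theta) via the delta method: sqrt(g' Sigma g),
    with g the numerical (central-difference) gradient of f at theta."""
    k = len(theta)
    g = []
    for i in range(k):
        up, dn = list(theta), list(theta)
        up[i] += h
        dn[i] -= h
        g.append((f(*up) - f(*dn)) / (2 * h))
    var = sum(g[i] * cov[i][j] * g[j] for i in range(k) for j in range(k))
    return math.sqrt(var)

# Hypothetical estimates (omega, alpha, beta) and covariance matrix
theta = (0.05, 0.08, 0.90)
cov = [[1e-4, 0.0, 0.0], [0.0, 4e-4, 0.0], [0.0, 0.0, 4e-4]]
sigma2 = unconditional_variance(*theta)   # 0.05 / (1 - 0.98) = 2.5
se = delta_method_se(theta, cov, unconditional_variance)
```

Note how close α + β is to one here: small sampling variation in α or β produces a large standard error for the unconditional variance, which is exactly the fragility the passage describes.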


As Engle and Patton (2000) note, volatility models must be able to forecast future volatility. At the same time, they must be able to capture the stylized facts of volatility series. To this end, researchers began developing different types of time-varying volatility models immediately after Engle's (1982) ARCH model. ARCH-type models have been extended in various directions, beginning with Bollerslev's (1986) GARCH, to capture the important nonlinearity, asymmetry, and long-memory properties of the volatility process (see, e.g., Andersen and Bollerslev, 1998). Other popular extensions of GARCH-type models that improve the flexibility of the basic ARCH model are the EGARCH (Nelson, 1991), GJR-GARCH (Glosten, Jagannathan, and Runkle, 1993), AGARCH (Engle, 1990), APARCH (Ding et al., 1993), TGARCH (Zakoian, 1994), and QGARCH (Sentana, 1995) models. A few other approaches, including MCMC (see, e.g., Verhofen, 2005; Chib, Nardari, and Shephard, 2002) and support vector machines (see, e.g., Khan, 2011), are also available in the literature to capture such stylized facts. We confine our study to GARCH-type models only.


We compare more than 1000 GARCH-type models in terms of their ability to fit the historical data and to forecast the conditional variance in an out-of-sample setting. The main finding is that even though the widely used GARCH(1,1) model performs well, it is still outperformed by more sophisticated models that allow for a leverage effect. The loss functions select the "zero-mean, generalized error distribution, moving average term with one lag" specification, and the information criteria select the "constant-mean, t-distribution" specification as the best GARCH(1,1) specification. Overall, the t-distribution seems to characterize the distribution of the returns better than the Gaussian distribution or the generalized error distribution. There are no significant differences between the three specifications for the expected value of returns. Models that allow for leverage effects are slightly superior to models that do not. In terms of forecasting performance, the best models are the ones that can accommodate a leverage effect.
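The information-criterion step of such a comparison can be sketched directly from each specification's maximized log-likelihood and parameter count. The log-likelihood values and the candidate list below are hypothetical; in practice they would come from actually fitting each GARCH(1,1) variant:

```python
import math

def aic(loglik, k):
    # Akaike information criterion: smaller is better
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # Bayesian information criterion: penalizes parameters more heavily as n grows
    return k * math.log(n) - 2 * loglik

# Hypothetical fitted specifications: (label, log-likelihood, #parameters)
n = 2500  # sample size
candidates = [
    ("constant-mean, normal", -3510.2, 4),
    ("constant-mean, t",      -3481.7, 5),
    ("zero-mean, GED, MA(1)", -3485.9, 5),
]
best = min(candidates, key=lambda m: bic(m[1], m[2], n))
```

With these illustrative numbers the BIC picks the constant-mean t specification, in line with the passage's finding that the t-distribution fits returns better than the Gaussian or GED alternatives.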


Table 4 shows that all values for the GARCH-CJ-type models are smaller than those of both the GARCH-RV and GARCH-type models. This leads us to conclude that in in-sample volatility forecasting, the GARCH-CJ-type models perform better than their counterparts and have more predictive power. However, when comparing the forecasting power of the volatility models under normal and Student-t distributions for the residuals, the findings are mixed and inconclusive as to which error-distribution assumption does more to boost the predictive power of the models. Note that for any given model among the six competing models, the four measures when assuming normally distributed errors are not all smaller (or, alternatively, all higher) than those under a Student-t assumption, so judging the predictive power of the models depends on which measure is used for the comparison.
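The passage does not name its four measures, so as a sketch, here are four loss functions commonly used to score volatility forecasts against a realized-variance proxy (MSE, MAE, QLIKE, R²LOG); whether these match the paper's choices is an assumption:

```python
import math

def forecast_losses(proxy, forecast):
    """Common volatility-forecast loss measures. `proxy` holds realized-variance
    proxies, `forecast` the model's conditional-variance forecasts (both > 0)."""
    n = len(proxy)
    mse = sum((p - f) ** 2 for p, f in zip(proxy, forecast)) / n
    mae = sum(abs(p - f) for p, f in zip(proxy, forecast)) / n
    # QLIKE is robust to noise in the volatility proxy (Patton, 2011)
    qlike = sum(math.log(f) + p / f for p, f in zip(proxy, forecast)) / n
    r2log = sum(math.log(p / f) ** 2 for p, f in zip(proxy, forecast)) / n
    return {"MSE": mse, "MAE": mae, "QLIKE": qlike, "R2LOG": r2log}

# Toy illustration with three forecast/proxy pairs
losses = forecast_losses([1.2, 0.8, 2.0], [1.0, 1.0, 1.5])
```

Because the four measures weight large and small errors differently, two models can easily be ranked one way by MSE and the opposite way by QLIKE, which is exactly the ambiguity the passage reports.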


Hamilton and Susmel [6] stated that the spurious high-persistence problem in GARCH-type models can be solved by combining the Markov Regime Switching (MRS) model with ARCH models (SWARCH). The idea behind regime-switching models is that as market conditions change, the factors that influence volatility also change. Researchers have since continued to develop extensions of the GARCH model and to document the benefits of using it [1, 7-9].


The GARCH-in-mean process is an important extension of the standard GARCH (generalized autoregressive conditional heteroscedastic) process, and it has wide applications in economics and finance. The parameter estimation of GARCH-type models usually involves the quasi-maximum likelihood (QML) technique, as it produces consistent and asymptotically Gaussian distributed estimators under certain regularity conditions. For a pure GARCH model, such conditions have already been found, and the asymptotic properties of its QML estimator are well understood. However, when it comes to GARCH-in-mean models, those properties are still largely unknown. The focus of this work is to establish a set of conditions under which the QML estimator of GARCH-in-mean models has the desired asymptotic properties. Some general Markov model tools are applied to derive the result.
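To make the object of QML estimation concrete, here is a minimal sketch of the Gaussian quasi-log-likelihood for one common GARCH(1,1)-in-mean specification, where the in-mean term enters as λh_t (other variants use λ√h_t or λ log h_t; the choice here is an assumption for illustration):

```python
import math

def garch_in_mean_loglik(returns, mu, lam, omega, alpha, beta):
    """Gaussian quasi-log-likelihood of a GARCH(1,1)-in-mean model:
        r_t = mu + lam * h_t + e_t,   e_t ~ (0, h_t)
        h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}
    QML maximizes this even when the true innovations are non-Gaussian."""
    h = omega / (1.0 - alpha - beta)  # initialize h_1 at the unconditional variance
    ll, e_prev = 0.0, None
    for r in returns:
        if e_prev is not None:
            h = omega + alpha * e_prev ** 2 + beta * h
        e = r - mu - lam * h          # the in-mean term shifts the conditional mean
        ll += -0.5 * (math.log(2 * math.pi) + math.log(h) + e * e / h)
        e_prev = e
    return ll

# Toy evaluation at hypothetical parameter values
ll = garch_in_mean_loglik([0.01, -0.02, 0.015],
                          mu=0.0, lam=0.1, omega=0.05, alpha=0.08, beta=0.90)
```

A QML estimator would maximize this function over (μ, λ, ω, α, β) subject to the positivity and stationarity constraints discussed earlier; the asymptotics of that maximizer are what the cited work studies.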


All the GARCH-type models analyzed and presented so far in the paper are single-regime models. However, under the effect of extreme economic and geopolitical events, there is a high chance that the market for more volatile assets, such as energy commodities, will behave in a completely different way than it would under normal conditions. As a result, researchers trying to capture this change in the market behavior of energy products adopted the Markov-switching approach first presented by Hamilton (1994) and combined it with the traditional GARCH model of Bollerslev (1986), developing a new hybrid model that has all the advantages of GARCH-type models while further allowing for a probable regime switch in the estimated parameters of the variance process. Hence, the basic difference between single-regime GARCH models and multiple-regime Markov-switching GARCH models (MS-GARCH) is that the latter incorporate a regime variable that is allowed to switch over different regimes following a Markov process.
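The mechanics can be seen in a minimal simulation sketch of a two-regime MS-GARCH(1,1): a Markov chain selects the regime each period, and the variance recursion uses that regime's parameters. All parameter and transition values below are illustrative; note this simulates the path-dependent variant, whereas estimation in practice typically relies on approximations such as those of Gray or Haas et al.:

```python
import random

def simulate_ms_garch(n, trans, params, seed=42):
    """Simulate a two-regime Markov-switching GARCH(1,1) return path.
    `trans[i][j]` is P(regime j at t | regime i at t-1);
    `params[i]` holds (omega, alpha, beta) for regime i."""
    rng = random.Random(seed)
    s = 0                                               # start in regime 0
    h = params[s][0] / (1.0 - params[s][1] - params[s][2])
    e_prev, path = 0.0, []
    for _ in range(n):
        # Markov step: draw the next regime from the current transition row
        s = 0 if rng.random() < trans[s][0] else 1
        omega, alpha, beta = params[s]
        h = omega + alpha * e_prev ** 2 + beta * h      # regime-specific recursion
        e = rng.gauss(0.0, 1.0) * h ** 0.5
        path.append(e)
        e_prev = e
    return path

# Calm regime 0 vs. turbulent regime 1 (hypothetical values)
returns = simulate_ms_garch(
    n=500,
    trans=[[0.98, 0.02], [0.05, 0.95]],
    params=[(0.02, 0.05, 0.90), (0.20, 0.10, 0.85)],
)
```

The persistent transition probabilities (0.98 and 0.95 on the diagonal) keep the chain in each regime for long stretches, which is how MS-GARCH reproduces the abrupt, lasting shifts in volatility that single-regime models mistake for extreme persistence.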


To estimate VaR accurately, it is essential to produce accurate volatility estimates. In this context, we develop different ways to estimate volatility. When we fit models to REIT returns, we need to take the volatility-clustering phenomenon into account. Bollerslev (1986) [1] generalized the ARCH model to the GARCH model, which is able to capture time-varying volatility. The GARCH model uses a linear function of the squared historical innovations to approximate the conditional variance. However, this model has the drawback of overlooking the leverage effect in the volatility of REIT returns. The EGARCH, GJR-GARCH, and APARCH models are applied here to capture the conditional asymmetry properties. In this paper, we focus on the use of these GARCH-type models to estimate and forecast the daily VaR of Real Estate Investment Trust (REIT) stocks over a fixed time period.
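The chain from the variance recursion to a one-day parametric VaR can be sketched as follows, assuming zero-mean returns, Gaussian innovations, and illustrative parameter values (the paper's asymmetric models would replace the recursion, and its distributional choices would replace the quantile):

```python
def garch11_variance(returns, omega, alpha, beta):
    """Run the GARCH(1,1) conditional-variance recursion over a return sample:
        h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}   (zero-mean returns)
    and return the one-step-ahead variance forecast."""
    h = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for r in returns:
        h = omega + alpha * r * r + beta * h
    return h

def gaussian_var(h_next, z=1.645):
    """One-day parametric VaR at ~95% under normal innovations:
    VaR = z * sqrt(h_{t+1}), reported as a positive loss number."""
    return z * h_next ** 0.5

# Hypothetical daily returns (in percent) and parameter values
r = [0.5, -1.2, 0.3, -0.8, 2.1]
h_next = garch11_variance(r, omega=0.05, alpha=0.08, beta=0.90)
var95 = gaussian_var(h_next)
```

Because h_{t+1} reacts to the most recent squared return, the VaR widens after turbulent days and narrows after calm ones, which is precisely why clustering-aware models beat constant-volatility VaR.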


The paper introduces a new simple semiparametric estimator of the conditional variance-covariance and correlation matrix (SP-DCC). While sharing a similar sequential approach with existing dynamic conditional correlation (DCC) methods, SP-DCC has the advantage of not requiring the direct parameterization of the conditional covariance or correlation processes, thereby also avoiding any assumption on their long-run target. In the proposed framework, conditional variances are estimated in the first step by univariate GARCH models, for the actual and suitably transformed series; the latter are then nonlinearly combined in the second step, according to basic properties of the covariance and correlation operators, to yield nonparametric estimates of the various conditional covariances and correlations. Moreover, in contrast to available DCC methods, SP-DCC also allows for straightforward estimation in the non-simultaneous case, i.e. for the estimation of conditional cross-covariances and correlations displaced at any time horizon of interest. A simple ex-post procedure to ensure well-behaved conditional variance-covariance and correlation matrices, grounded in nonlinear shrinkage, is finally proposed. Due to its sequential implementation and scant computational burden, SP-DCC is very simple to apply and suitable for modeling vast sets of conditionally heteroskedastic time series.
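The "nonlinear combination" in the second step rests on a basic property of the covariance operator, the polarization identity Cov(X, Y) = [Var(X+Y) − Var(X−Y)] / 4. The sketch below verifies the identity with plain sample variances; in SP-DCC the two variances would instead be conditional variances from univariate GARCH fits to the transformed series x+y and x−y:

```python
def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    # Population variance of a sample
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cov_by_polarization(x, y):
    """Covariance recovered from variances alone:
    Cov(X, Y) = [Var(X + Y) - Var(X - Y)] / 4."""
    s = [a + b for a, b in zip(x, y)]
    d = [a - b for a, b in zip(x, y)]
    return (var(s) - var(d)) / 4.0

def direct_cov(x, y):
    # Textbook covariance, for comparison
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

x = [0.3, -0.5, 1.1, 0.2, -0.9]
y = [0.1, -0.4, 0.8, -0.2, -0.6]
```

Since the identity is exact, the two routes agree to floating-point precision, which is what lets SP-DCC build covariances out of nothing but univariate variance models.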

Abstract— This paper studies a total of twelve different specifications from GARCH families of models, such as the GARCH, EGARCH, and GJR-GARCH models, which are fitted to study the depth and incidence of volatility and also for forecasting. Nifty return series data has been tested to find the appropriate volatility model and forecasting efficiency. Four measures have been used to estimate the forecasting error across all twelve models. For volatility estimation, GJR is found to be the best model for the Nifty series. Among the four forecasting measures considered, MSE is found to be the best performer for calculating forecasting errors.

The theory of option pricing is an important topic in the financial literature. The seminal works of Black and Scholes and Merton were the starting point for European option pricing. Following the finding that these model prices systematically differ from market prices, the literature on option valuation has formulated a number of theoretical models designed to capture these empirical biases. Many empirical studies on asset price dynamics have demonstrated that characteristics such as time-varying volatility, volatility clustering, non-normality, and the leverage effect should be taken into account when modelling financial data. Therefore, various models and techniques were developed in both discrete and continuous time to incorporate some or all of the above properties.


Chaker and Mabrouk (2011) estimated VaR using ARCH- and GARCH-type models such as FIGARCH, FIAPARCH, and HYGARCH. These models were estimated under normal, Student-t, and skewed Student-t distributions. The results show that by considering features of financial time series data such as long memory, fat tails, and asymmetric behavior, daily VaR predictions become more accurate. They also indicate that FIGARCH performs better than the other models. P.T. Wu and Shieh (2007) and T.L. Tang (2006) also investigated value-at-risk analysis for long-term interest rate futures and long memory in stock index futures markets.


Table A.2 (Appendices) shows the estimation results of the GARCH-type models. It can be noticed that the log-likelihood value is maximised under the AR(1)-CGARCH(1,1) model. Interestingly, all three information criteria also select the AR(1)-CGARCH(1,1) model. Moreover, all the parameter estimates are statistically significant for the AR(1)-CGARCH(1,1) model, while the results of the ARCH(5) and 2 ( 10 )

Machine learning techniques have long been used for financial time series analysis and prediction. One of the most complex machine learning approaches employed is the Artificial Neural Network (ANN) [17]. ANNs have been among the most popular machine learning techniques since the early 1990s. Standard statistical models have long served the field of financial time series analysis and prediction, but artificial neural networks have gained popularity. Schoeneburg investigated the possibility of price prediction by different neural algorithms [18]. The results indicated that artificial neural network models can be successfully applied to predicting financial time series data. In combined models, artificial neural networks have been coupled with asymmetric models for better performance in modeling stock price return volatility.

Chapter 2 reviews the forecasting of crude oil prices. First, crude oil prices will be reviewed. Then, the volatility in crude oil prices will be discussed. The discussion covers past researchers' work on the Box-Jenkins methodology, and GARCH-type models are also presented. Finally, conditional heteroscedasticity is explained.


Several studies have been conducted to estimate risk using various VaR methodologies. Among the early studies, Allen (1994) evaluated the performance of traditional VaR methods, historical simulation (HS) and variance-covariance. Zangari (1996) investigated VaR models under a non-normality assumption. Jamshidian and Zhu (1996, 1997) studied the efficiency of Monte Carlo methods in comparison with the variance-covariance approach. However, all these methods are based on the assumption of constant volatility, i.e. homoscedasticity.

The forecast performance of the fitted GARCH(1,1) and BL-GARCH(1,1) models for the inflation rates has been investigated. The out-of-sample results show that the classical GARCH model failed to produce good forecasts for the inflation rate series under consideration, whereas the BL-GARCH performed very well, as shown in tables (9) and (10) below:

There have been a number of extreme value studies in the finance literature in recent years. De Haan, Jansen, Koedijk and de Vries (1994) study quantile estimation using extreme value theory. McNeil (1998) studies the estimation of the tails of loss severity distributions and the estimation of quantile risk measures for financial data using extreme value theory. Embrechts et al. (1998) overview extreme value theory as a risk management tool. Muller et al. (1998) and Pictet et al. (1998) study the probability of exceedances and compare them with GARCH models for foreign exchange rates. McNeil (1999) provides an extensive overview of extreme value theory for risk managers. One year later, McNeil and Frey developed a new two-step approach that permits estimation of tail-related risk measures for heteroskedastic financial time series; such a method is a combination of GARCH models and the EVT method. Some applications of EVT to finance and insurance can be found in Embrechts, Klueppelberg and Mikosch (1997) and Reiss and Thomas (1997).
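The second step of the McNeil-Frey approach fits a generalized Pareto distribution (GPD) to the exceedances of the GARCH-standardized residuals over a threshold u and reads off tail quantiles from the fit. A sketch of the standard GPD tail-quantile formula, with hypothetical fitted values standing in for an actual estimation:

```python
def gpd_var(u, beta, xi, n, n_u, q):
    """Tail quantile (VaR) implied by a GPD fit to threshold exceedances:
        VaR_q = u + (beta / xi) * ((((1 - q) * n) / n_u) ** (-xi) - 1)
    where u is the threshold, (beta, xi) the fitted GPD scale and shape,
    n the sample size, n_u the number of exceedances, and q the level."""
    return u + (beta / xi) * ((((1.0 - q) * n) / n_u) ** (-xi) - 1.0)

# Hypothetical GPD parameters fitted to standardized GARCH residuals
var_99 = gpd_var(u=2.0, beta=0.6, xi=0.2, n=1000, n_u=100, q=0.99)
```

In the full two-step procedure this quantile of the standardized residuals would then be scaled by the GARCH conditional volatility forecast to give the conditional VaR of the raw returns.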


Extreme returns in stock returns need to be captured for a successful risk management function to estimate the unexpected loss in a portfolio. Traditional value-at-risk models based on parametric models are not able to capture the extremes in emerging markets, where high volatility and nonlinear behavior in returns are observed. The Extreme Value Theory (EVT) approach with conditional quantiles proposed by McNeil and Frey (2000) rests on a limit theorem applied to the extremes rather than the mean of the return distribution: the limiting distribution of extreme returns always has the same form, without relying on the distribution of the parent variable. This paper uses 8 filtered EVT models created with conditional quantiles to estimate value-at-risk for the Istanbul Stock Exchange (ISE). The performances of the filtered expected shortfall models are compared with those of GARCH, GARCH with Student-t distribution, GARCH with skewed Student-t distribution, and FIGARCH by using alternative back-testing algorithms, namely the Kupiec test (1995), Christoffersen test (1998), Lopez test (1999), RMSE (70 days), h-step-ahead forecasting RMSE (70 days), number of exceptions, and h-step-ahead number of exceptions. The test results show that the filtered expected shortfall models have better performance in capturing fat tails in stock returns than parametric value-at-risk models do. Moreover, an increase in the conditional quantile decreases the h-step-ahead number of exceptions, which shows that filtered expected shortfall with a higher conditional quantile, such as 40 days, should be used for forward-looking forecasting.


Given the ability of GARCH to capture volatility, how far can GARCH models and their extensions, namely Exponential GARCH (EGARCH), Threshold GARCH (TGARCH), Power GARCH (PGARCH) and GARCH-in-Mean (GARCH-M), be used to model time series data in which periods of volatility clustering are highly persistent? What are the differences between symmetric and asymmetric GARCH models?
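The symmetric/asymmetric distinction can be illustrated with one step of a GJR-type variance recursion, in which an indicator term lets negative shocks raise tomorrow's variance more than positive shocks of the same size (all parameter values below are illustrative):

```python
def gjr_variance_step(h_prev, r_prev, omega, alpha, gamma, beta):
    """One step of the GJR-GARCH(1,1) recursion:
        h_t = omega + (alpha + gamma * I[r_{t-1} < 0]) * r_{t-1}^2 + beta * h_{t-1}
    gamma > 0 produces the leverage effect; gamma = 0 recovers symmetric GARCH."""
    indicator = 1.0 if r_prev < 0 else 0.0
    return omega + (alpha + gamma * indicator) * r_prev ** 2 + beta * h_prev

# Same shock magnitude, opposite signs, same starting variance
h_up   = gjr_variance_step(1.0,  1.5, omega=0.05, alpha=0.05, gamma=0.10, beta=0.85)
h_down = gjr_variance_step(1.0, -1.5, omega=0.05, alpha=0.05, gamma=0.10, beta=0.85)
```

The negative shock yields the larger next-period variance, which a symmetric GARCH, seeing only the squared return, cannot reproduce; this is the core difference the question above asks about.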
