This paper examines the predictive capacity of two Grey Systems forecasting models. The original Grey GM(1,1) forecasting model, introduced by Deng, together with an improved Grey GM(1,1) model proposed by Ji et al., are used to forecast medical tourism demand for Bermuda. The paper also introduces a quasi-optimization method for the alpha (weight) parameter. Five-step-ahead out-of-sample forecasts are produced after estimating the models on four data points. The results indicate that optimizing the alpha parameter substantially improves the predictive accuracy of the models, reducing the five-step-ahead out-of-sample Mean Absolute Percentage Error from roughly 7% to roughly 3.80% across the two models. Overall, the forecasting approaches demonstrate significant potential as an alternative to traditional forecasting methods in circumstances where substantial amounts of high-quality data are not available.
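As a sketch of the mechanics behind both models, a minimal GM(1,1) fit and one-to-several-step forecast in pure Python might look as follows. The function name and interface are illustrative; the alpha-weighted background value reduces to Deng's original model at alpha = 0.5, and the quasi-optimization described above amounts to searching over alpha for the best fit.

```python
import math

def gm11_forecast(x0, steps, alpha=0.5):
    """Fit a GM(1,1) grey model to the series x0 and forecast `steps`
    values ahead.  `alpha` weights the background value
    z1(k) = alpha*x1(k) + (1-alpha)*x1(k-1); alpha = 0.5 gives the
    classical symmetric choice."""
    n = len(x0)
    # first-order accumulated generating operation (1-AGO)
    x1 = [x0[0]]
    for v in x0[1:]:
        x1.append(x1[-1] + v)
    # background values and targets for the grey differential equation
    z = [alpha * x1[k] + (1 - alpha) * x1[k - 1] for k in range(1, n)]
    y = x0[1:]
    # least squares for x0(k) = -a*z(k) + b (a two-parameter regression)
    zm, ym = sum(z) / len(z), sum(y) / len(y)
    slope = (sum((zi - zm) * (yi - ym) for zi, yi in zip(z, y))
             / sum((zi - zm) ** 2 for zi in z))
    a, b = -slope, ym - slope * zm
    # time-response function, then inverse AGO to recover x0 forecasts
    x1_hat = [(x0[0] - b / a) * math.exp(-a * k) + b / a
              for k in range(n + steps)]
    return [x1_hat[k] - x1_hat[k - 1] for k in range(n, n + steps)]
```

On a four-point series growing at a constant 10% rate, e.g. `[100, 110, 121, 133.1]`, the one-step forecast lands close to the true continuation of about 146.4, which illustrates why grey models suit short exponential-like series.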
A key strength of HIV-forecasting disease modeling is that it leads to clear statements of the assumptions about the biological and social mechanisms which influence disease spread and dynamics. The model formulation process is especially valuable for statisticians, epidemiologists, mathematicians and modelers, because it forces them to be precise about the relevant aspects of disease transmission, course of infectivity, recovery, treatment prognosis and renewal of susceptibility. Statisticians need to formulate the models clearly and precisely using the clinical and biological parameters that are well understood in connection with the dynamics of HIV disease, such as the increase in CD4 count after the inception of HAART, the spectrum of HIV-TB co-infection, HCV/HBV co-infection, etc. Complete statements of the assumptions play a crucial role, so that the reasonableness of the model can be judged from the relevance of its conclusions. The framework of the forecasting model may help physicians, policymakers, young researchers and innovators. Only a limited number of HIV-forecasting model studies have been documented in India. In this context, the present study aims to formulate different forecasting models, assess quantitative conjectures and identify the trend of CD4 count before initiation of HAART therapy.
Evaluation of predictions is an important step in any forecasting process. For point estimates this is a straightforward task that typically involves computing the Euclidean distance between the predicted and observed points. There is a vast literature on evaluation metrics for point forecasting models; for a review of the most popular methods see . However, there are conspicuously fewer papers available that describe methods for evaluating density forecasting models. In fact, one must turn to the meteorological and financial literature to find any papers that focus on the evaluation of density forecasts with any degree of rigour. This is in spite of density forecast evaluation being a considerably more complex problem than point estimation. Diebold et al. suggest that there might be three reasons for this neglect.
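To make the point-forecast side concrete, two of the most common metrics can be written in a few lines of Python (the function names are ours; this is a generic sketch, not a method from the paper):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error: the Euclidean distance between the
    actual and predicted vectors, scaled by 1/sqrt(n)."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error, in percent.  Undefined when any
    actual value is zero, one reason several alternatives exist."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)
```

Density forecasts, by contrast, assign a whole predictive distribution to each observation, so no single distance between two points captures their quality; that is the complexity the paragraph above refers to.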
Judgmental model selection is used in practice because it has some endearing properties. It is intuitive: a problem that necessitates human intervention is always more meaningful and intellectually and intuitively appealing for users. It is interpretable: practitioners understand how this process works. The version of model-building that is based on judgmental decomposition is easy to explain and adapt to real-life setups. This simplicity is a welcome property (Zellner et al., 2002). In fact, the configuration used in our experiment is already offered in a similar format in popular software packages. For example, SAP APO-DP provides a manual (judgmental) forecasting model selection process, giving clear guidance for a judgmental selection driven by the prevailing components (most notably trend and seasonality) as perceived by the user (manager). Specialized off-the-shelf forecasting support systems like Forecast Pro also allow their optimal algorithmic selection to be overridden by the user.
A variety of studies examine international tourist flows to various Caribbean countries. Many of these studies utilize a structural econometric approach for analyzing tourism demand, but do not generally employ it for out-of-sample simulation exercises (Clarke, 1978; Carey, 1991; Metzgen-Quemarez, 1990; Vanegas and Croes, 2000; Vanegas and Croes, 2005; Yoon and Shafer, 1996). Studies which develop forecasting models primarily rely on structural time series models (Greenidge, 2000); univariate and transfer-function autoregressive integrated moving average (ARIMA) models; and autoregressive (AR) models (Dharmaratne, 1995; Dalrymple and Greenidge, 1999). A growing number of studies employ error correction models (ECMs) to analyze tourism demand in different markets around the world (Song, Witt, and Jensen, 2003; Ouerfelli, 2008). Comparatively few of the Caribbean studies to date, however, have tried to utilize ECMs for forecasting tourist arrivals (Croes and Rivera, 2010).
Prediction of non-renewable natural commodity prices has undergone major changes over the past decades. From conventional statistical techniques to artificial intelligence approaches, this problem has never failed to attract both the academic and practitioner communities. In this study, a new swarm intelligence (SI) algorithm, the Grey Wolf Optimizer (GWO), is employed for short-term crude oil and gasoline price forecasting. The efficiency of the developed GWO forecasting models is measured by Mean Absolute Percentage Error and prediction accuracy, and is compared against the results produced by Artificial Bee Colony (ABC) and Differential Evolution models. Findings of the study reveal competitive results: the GWO is comparable to the ABC algorithm in predicting gold price while being a better predictor for gasoline price. Such a forecasting model would benefit investors in planning their investment in energy commodities. As in future, it would be interesting to test the
Forecasting techniques that are based on regression analysis differ substantially, in their underlying concepts and theory, from the techniques of time series analysis, smoothing and decomposition. Regression techniques are generally referred to as causal or explanatory approaches to forecasting. They attempt to predict the future by discovering and measuring the effect of important independent variables on the dependent variable to be forecast. Because of their costs, these methods are generally used in long-run planning and in situations where the value of increased accuracy warrants the additional expense. In this paper we discuss various basic forecasting models, such as the naive, moving average, simple smoothing, double moving average, double smoothing, triple smoothing and adaptive smoothing forecasting models.
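Several of the basic models just listed can each be written in a few lines; the Python sketch below (with our own function names, as an illustration rather than the paper's code) shows the naive, moving-average, simple-smoothing and Brown's double-smoothing one-step forecasts:

```python
def naive(series):
    """Naive forecast: the next value equals the last observation."""
    return series[-1]

def moving_average(series, window):
    """Forecast as the mean of the last `window` observations."""
    return sum(series[-window:]) / window

def simple_smoothing(series, alpha):
    """Simple exponential smoothing; returns the one-step-ahead forecast,
    i.e. the final smoothed level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

def double_smoothing(series, alpha, steps=1):
    """Brown's double exponential smoothing, which extrapolates a
    linear trend `steps` periods ahead."""
    s1 = s2 = series[0]
    for y in series[1:]:
        s1 = alpha * y + (1 - alpha) * s1   # first smoothing pass
        s2 = alpha * s1 + (1 - alpha) * s2  # smoothing of the smoothed series
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + steps * trend
```

On a trending series such as `[10, 20, 30, 40, 50]`, the naive and simple-smoothing forecasts lag behind, while double smoothing picks up the slope and forecasts close to 60, which is the usual motivation for moving beyond single smoothing.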
Another branch of time series models are the unobserved components (UC) models. According to these models, an observable economic series can be expressed as consisting of unobservable components. The observable series is linked to the unobservable components via the measurement equation. The dynamics of an unobservable component is explained by the transition equation, by some other variables, or by its own past developments. For example, the GDP series can be expressed by the measurement equation as a sum of the trend, i.e. potential GDP, and the cycle, i.e. the output gap, both of which are unobservable. In the transition equation the trend and the cycle can then be expressed as time series models (for instance, a random walk with drift for the trend, and an autoregressive process for the cycle). This way of expressing a time series is called a state space representation. UC models, written in state space form, can be estimated by the Kalman filter, an iterative algorithm that can be used for many purposes, including estimation. For more on UC models, see Harvey (2006). UC models can be both univariate and multivariate. In a univariate UC model, a series depends only on its own past values. Multivariate UC models, on the other hand, can also incorporate economic theory; in these models the dynamics of a series is not completely explained by its past developments but by other variables as well.
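The simplest concrete instance of this setup is the local level model, and the Kalman filter recursion for it fits in a few lines. The Python sketch below is a minimal illustration of the measurement/transition structure described above (our own implementation, with assumed known variances rather than estimated ones):

```python
def local_level_filter(y, q, r, a0=0.0, p0=1e6):
    """Kalman filter for the local level UC model:
        y_t  = mu_t + eps_t,     eps_t ~ N(0, r)   (measurement equation)
        mu_t = mu_{t-1} + eta_t, eta_t ~ N(0, q)   (transition equation)
    Returns the filtered estimates of the unobserved level mu_t.
    The large default p0 acts as a diffuse prior on the initial state."""
    a, p = a0, p0                  # state mean and variance
    filtered = []
    for obs in y:
        p_pred = p + q             # predict: the level is a random walk
        k = p_pred / (p_pred + r)  # Kalman gain
        a = a + k * (obs - a)      # update with the new observation
        p = (1 - k) * p_pred
        filtered.append(a)
    return filtered
```

In practice the variances q and r are unknown and are estimated by maximizing the likelihood that the same recursion delivers as a by-product, which is what "estimated by the Kalman filter" refers to above.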
We have solved the open problem that, in INAR(p) models, both the likelihood function and multi-step probabilistic forecasts "seem" to be computationally intractable. Our method is based on (i) the simple relationship between the p.g.f. and the p.m.f. of a count distribution, and (ii) the CaR property of the INAR(p) process. Our method eliminates the estimation bias due to the saddlepoint approximation of Pedeli et al. (2015), as well as the approximation (resp. simulation) error of McCabe et al. (2011) [resp. Jung and Tremayne (2006)] when it comes to forecasting.
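To illustrate the kind of exact forecast computation at stake, consider the INAR(1) special case (a sketch of the standard convolution argument, not the paper's general INAR(p) method): the one-step forecast distribution is the convolution of a binomial thinning of the current count with the innovation distribution, here taken to be Poisson.

```python
import math

def inar1_forecast_pmf(x, alpha, lam, kmax):
    """Exact one-step-ahead forecast p.m.f. of an INAR(1) process
        X_{t+1} = alpha o X_t + eps_{t+1},
    where `o` is binomial thinning and eps ~ Poisson(lam), given X_t = x.
    The result is the convolution of Binomial(x, alpha) with Poisson(lam),
    truncated at kmax."""
    def pois(k):
        return math.exp(-lam) * lam ** k / math.factorial(k)
    def thin(j):
        return math.comb(x, j) * alpha ** j * (1 - alpha) ** (x - j)
    return [sum(thin(j) * pois(k - j) for j in range(min(x, k) + 1))
            for k in range(kmax + 1)]
```

The forecast mean is alpha*x + lam exactly, and multi-step forecasts chain the same convolution step, which is where p.g.f.-based methods become convenient.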
This is the use of the data, and of any information on how the series was generated, to suggest a subclass of parsimonious models worthy of being entertained. To achieve this, the time plot, the autocorrelation function (correlogram), tests for randomness or stationarity, and the partial autocorrelation function are used to obtain the main properties of the series.
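The correlogram mentioned above plots the sample autocorrelations against the lag; a minimal Python helper (our own, using the standard biased estimator) for computing them:

```python
def acf(series, max_lag):
    """Sample autocorrelation function (the correlogram ordinates) up to
    `max_lag`, using the usual biased estimator with the full-sample mean."""
    n = len(series)
    mean = sum(series) / n
    c0 = sum((x - mean) ** 2 for x in series) / n   # lag-0 autocovariance
    out = []
    for lag in range(1, max_lag + 1):
        ck = sum((series[t] - mean) * (series[t + lag] - mean)
                 for t in range(n - lag)) / n
        out.append(ck / c0)
    return out
```

A slowly decaying correlogram suggests non-stationarity and hence differencing, while a sharp cut-off after a few lags points to a low-order parsimonious model, which is exactly the identification use described above.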
For example, when reserves for outstanding liabilities are to be estimated in insurance companies, there is usually no follow-up data on the individuals in the portfolio, and reported claims, categorized by line of business and other baseline characteristics, are the only records. The reason that insurers do not use classical biostatistical exposure data, i.e., do not follow every underwritten policy, might be the poor quality and complexity of such exposure data, with many potential causes of failure which heavily affect the actual cost of a claim. When claim numbers are considered, X is the underwriting date of the policy, and Y is the time between the underwriting date and the report of a claim, the reporting delay. Truncation occurs when X + Y is smaller than the date of data collection. The mass of the unobserved, future triangle, J, then corresponds to the proportion of claims underwritten in the past which are not yet reported. The assumption of a multiplicative density means that the reporting delay does not depend on the underwriting date. Thus, calendar time effects like court rulings, the emergence of latent claims, or changes in operational time cannot be accommodated before further generalisations of the model are introduced. Nevertheless, we restrict our discussion to the multiplicative model for several reasons. It has its justification as a baseline for generalisations in many directions. It also approximates the data structure well enough in many applications. We will come back to this point when discussing our data example. The relevance of the multiplicative model also lies in the fact that it helps to understand discrete related versions that are used every day in all non-life insurance companies; see England and Verrall (2002) for an overview of those discrete models.
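As an illustration of those everyday discrete versions (and not of the continuous model discussed here), a minimal chain-ladder completion of a cumulative run-off triangle can be sketched as follows; chain ladder implicitly makes the same multiplicative assumption, namely that delay development is independent of the underwriting period:

```python
def chain_ladder(triangle):
    """Complete a cumulative run-off triangle with chain-ladder
    development factors.  `triangle` is a list of rows (underwriting
    periods); row i holds observed cumulative claims for delays 0..n-1-i.
    Returns the completed square; the filled-in cells are the analogue
    of the unobserved future triangle J in the text."""
    n = len(triangle)
    full = [row[:] for row in triangle]
    for j in range(n - 1):
        # development factor from observed rows that have both columns
        rows = [i for i in range(n) if len(triangle[i]) > j + 1]
        f = (sum(triangle[i][j + 1] for i in rows)
             / sum(triangle[i][j] for i in rows))
        for i in range(n):
            if len(full[i]) == j + 1:      # delay j+1 not observed yet
                full[i].append(full[i][j] * f)
    return full
```

For the toy triangle `[[100, 150, 165], [110, 165], [120]]` the factors are 1.5 and 1.1, so the missing cells are filled with 181.5, 180 and 198; calendar-time effects that break the multiplicative assumption would distort exactly these factors.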
following the Box-Jenkins methodology. This method follows a number of steps. The first step is to transform the data to stabilize the series and determine the order of integration d by appropriate stationarity tests (the Augmented Dickey-Fuller and Phillips-Perron tests). The second step is to determine the number of lags p for the autoregressive component by studying the partial autocorrelation function (PACF), and the order q for the moving average component MA(q) by considering the autocorrelation function (ACF). The third step is to estimate the candidate models and use information criteria to select the best one. The fourth step runs tests for autocorrelation and white-noise residuals. In case of failure in the fourth step, the procedure is repeated from the second step; otherwise, the model is ready to use for forecasting. Table 1 summarizes the best candidate models for the constructed annual and quarterly ARIMA specifications. All the models have in common an integrated component of first order; all the series are non-stationary.
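Steps two and three can be sketched for the pure AR case, where conditional least squares suffices (MA terms would require nonlinear estimation). The Python helper below, with illustrative names of our own, fits candidate AR(p) models and selects the order by AIC:

```python
import math, random

def fit_ar(y, p):
    """Fit y_t = c + phi_1 y_{t-1} + ... + phi_p y_{t-p} + e_t by
    conditional least squares; return ([c, phi_1, ..., phi_p], AIC)."""
    n = len(y) - p
    k = p + 1
    X = [[1.0] + [y[t - j] for j in range(1, k)] for t in range(p, len(y))]
    Y = y[p:]
    # normal equations A beta = b, solved by Gaussian elimination
    A = [[sum(row[r] * row[c] for row in X) for c in range(k)] for r in range(k)]
    b = [sum(row[r] * yv for row, yv in zip(X, Y)) for r in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    rss = sum((yv - sum(xc * bc for xc, bc in zip(row, beta))) ** 2
              for row, yv in zip(X, Y))
    aic = n * math.log(rss / n) + 2 * (k + 1)   # +1 for the error variance
    return beta, aic

def select_ar_order(y, max_p):
    """Step three in miniature: estimate each candidate AR(p) and keep
    the order with the lowest AIC."""
    return min(range(1, max_p + 1), key=lambda p: fit_ar(y, p)[1])
```

On data simulated from an AR(1) with coefficient 0.7, the estimated slope recovers a value close to 0.7, mirroring the estimate-then-compare loop of steps two to four.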
With respect to the DSGE models, the most important finding is that their predictive scores are higher than their relative multivariate point forecast accuracy suggests. I therefore conclude that the theoretical restrictions embedded in these models help produce a plausible covariance structure between the forecasts. The small DSGE model, which displayed relatively high forecast errors, is no worse than the small OLS-VAR and small BVAR in terms of predictive scores, while the multivariate density forecast accuracy of the medium DSGE model is even better. With only one exception (h = 1 and m = 3), it shows higher scores than all OLS-VARs and BVARs at all forecast horizons and for both variable selections. The medium DSGE model is competitive with the medium-large DSGE model for the small variable selection, but not for the medium selection. In fact, the medium-large DSGE outperforms all competitors at all forecast horizons with respect to the medium selection and, except for the longest horizon, also for the medium-large selection. The structure of this model is therefore rich enough to produce reasonable comovements between the 15 variables included in the estimation.
Inflation is an economic phenomenon that has attracted the attention of many economists and researchers. There is a wide variety of opinions among researchers on the causes of inflation and how it could be reduced. In this study, we address the problem of inflation in Sudan using Box-Jenkins models, explaining and interpreting the behavior of inflation through time series analysis with autoregressive integrated moving average (ARIMA) models over the period 1998-2013. During this period, the Sudanese economy passed through dramatic changes, due to the separation of southern Sudan, the change of the national currency from dinar to pound, and the rise of Sudan's debt outstanding to the International Monetary Fund. Sudan suffered from inflation during the nineties, when inflation rates rose dramatically and continued to rise until 1997, at which point the inflation rate reached 130%. The discovery of oil in the southern region of Sudan had a direct impact in reducing the inflation rate, which by 2007 had been reduced to