Some information is always lost when using any of the candidate models. Based on the AIC values, the model that minimizes the estimated information loss is ARIMA-GARCH; ARIMA is 0.46 times as probable as ARIMA-GARCH to minimize the information loss. The bias proportion, the variance proportion, and the covariance proportion sum to 1. The bias proportion measures how far the mean of the forecast is from the mean of the actual series, while the variance proportion measures how far the variation of the forecast is from the variation of the actual series. The remaining unsystematic forecasting errors are captured by the covariance proportion. Based on Table 7, the forecasts produced by ARIMA-GARCH are better since its bias and variance proportions are lower than those of ARIMA. Furthermore, the MAPE values for both in-sample and out-of-sample forecasts for ARIMA-GARCH are lower than those for ARIMA. It can be concluded that, in the case of the selling prices of 1 oz of Malaysian gold, the hybrid ARIMA-GARCH model is an effective way to improve on the forecasting accuracy achieved by ARIMA alone.
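The two quantities used above can both be computed directly. Below is a minimal sketch (function names are illustrative, not from the study): the Akaike relative likelihood exp((AIC_best − AIC_candidate)/2), which yields the "0.46 times as probable" style of statement, and the standard Theil decomposition of the MSE into bias, variance, and covariance proportions, which sum to 1 by construction.

```python
import numpy as np

def akaike_rel_likelihood(aic_candidate, aic_best):
    # exp((AIC_best - AIC_candidate)/2): how probable the candidate model
    # is, relative to the AIC-best model, to minimize information loss
    return np.exp((aic_best - aic_candidate) / 2.0)

def theil_proportions(forecast, actual):
    """Decompose the MSE into bias, variance, and covariance proportions."""
    f, a = np.asarray(forecast, float), np.asarray(actual, float)
    mse = np.mean((f - a) ** 2)
    sf, sa = f.std(), a.std()          # population std (ddof=0)
    r = np.corrcoef(f, a)[0, 1]
    bias = (f.mean() - a.mean()) ** 2 / mse
    variance = (sf - sa) ** 2 / mse
    covariance = 2.0 * (1.0 - r) * sf * sa / mse
    return bias, variance, covariance  # the three proportions sum to 1
```

Lower bias and variance proportions (and hence a dominant covariance proportion) indicate that the remaining errors are unsystematic, which is the property the text uses to favor ARIMA-GARCH.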
A feature selection process can be used to remove terms in the training dataset that are statistically uncorrelated with the class labels, thus improving both efficiency and accuracy. It implies not only cardinality reduction, i.e., an explicit cut-off on the number of attributes used when building a model, but also the choice of attributes: the model actively selects or discards attributes based on their usefulness for analysis. The subsets selected at each step are compared and analyzed according to an assessment function. If the subset selected in step m + 1 is better than the subset selected in step m, the subset selected in step m + 1 is taken as the current optimum subset. The feature selection problem has been modelled as a mixed 0-1 integer program. SVM-RFE is an SVM-based feature selection algorithm. In addition to reducing classification time by shrinking the feature set, it can improve the classification accuracy rate. In recent years, many scholars have improved classification performance in medical diagnosis by taking advantage of this method [13, 14].
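The recursive elimination idea behind SVM-RFE can be sketched in a few lines. The snippet below is an illustration, not the original algorithm: a ridge-regularized least-squares fit stands in for the linear SVM, and features are repeatedly dropped by smallest squared weight, which is the SVM-RFE ranking criterion.

```python
import numpy as np

def rfe_rank(X, y, n_keep=2, ridge=1e-3):
    """Recursive feature elimination sketch: repeatedly fit a linear
    model on the remaining features and drop the one with the smallest
    squared weight, as SVM-RFE does with the SVM weight vector."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        Xs = X[:, remaining]
        # w = (Xs'Xs + ridge*I)^{-1} Xs'y  -- linear scoring function
        w = np.linalg.solve(Xs.T @ Xs + ridge * np.eye(len(remaining)),
                            Xs.T @ y)
        drop = int(np.argmin(w ** 2))  # least useful remaining feature
        remaining.pop(drop)
    return remaining
```

Because one feature is removed per refit, the loop also yields the step-m versus step-(m + 1) subset comparison described above, with the model's own weights acting as the assessment function.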
Based on the presence of volatility clustering in the residuals and the ARCH-LM test result, it can be concluded that the model was not a good fit. A better model for forecasting Malaysian gold prices was deemed necessary, and a hybrid model was considered an effective way to improve forecast accuracy. In a study of symmetric and asymmetric Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models for forecasting Malaysian gold prices, a GARCH variant called TGARCH was shown to outperform the GARCH, GARCH-M and EGARCH models. The TGARCH model is a GARCH variant that includes leverage terms for modeling asymmetric volatility clustering. Hence, the current study proposes using an ARIMA-GJR model to analyze the series under study. Table 3 presents the estimation results for the variance equation of the hybrid ARIMA(2, 1, 2)-GJR(1, 1) model as applied to Malaysian gold.
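For reference, the variance equation of a GJR(1, 1) model typically takes the following form (this is the standard textbook parameterization; the exact specification estimated in Table 3 may differ):

```latex
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2
           + \gamma\, I_{t-1}\,\varepsilon_{t-1}^2
           + \beta\,\sigma_{t-1}^2,
\qquad
I_{t-1} =
\begin{cases}
1 & \text{if } \varepsilon_{t-1} < 0,\\
0 & \text{otherwise.}
\end{cases}
```

A positive leverage coefficient $\gamma$ means that negative shocks raise next-period variance more than positive shocks of equal magnitude, which is the asymmetric volatility clustering the text refers to.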
for forecasting space-time data with a calendar variation effect. They found that the hybrid GSTARX-NN model gave more accurate forecasts than the traditional GSTARX models. Another study compared a few variants of exponential and backpropagation ANN models for forecasting rice production; the results showed that the neural network method is preferable to the statistical method since it yields lower Mean Square Error (MSE) and Mean Absolute Percentage Error (MAPE). AI has also been used in the financial sector for various kinds of forecasting, analysis and decision making. One study explored the use of machine learning algorithms to analyse the effect of financial news on stock market prices: Support Vector Machine (SVM) and Random Forest (RF) algorithms were used, and the RF algorithm was found to give better accuracy than the SVM algorithm. Another study proposed a procedure to construct Triangular Fuzzy Numbers from single-point data and then used an autoregressive model to forecast currency exchange rates of Association of South East Asian Nations (ASEAN) countries. A Fuzzy Neural System (FNS) was used to forecast the inflation rate, obtaining better results in terms of RMSE than traditional methods. A fuzzy sets method predicted the Russia Trading System (RTS) index with a confidence level of about 90% despite having only an incomplete data set at hand. A hybrid model, wavelet radial basis function neural networks (WRBFNN), was proposed for forecasting non-stationary time series and was found, in terms of MAPE and MSE, to be superior to the traditional wavelet feedforward neural network (WFFN) model.
which overcomes many problems with low-demand SKUs and provides consistent measures across SKUs. Another metric is the Mean Absolute Scaled Error, or MASE, which compares the error from a forecast model with the error resulting from a naïve method. Slightly more complex is GMASE, proposed by Valentin (2007), which is a weighted geometric mean of the individual MASEs calculated at the SKU level. Still other metrics are available, including those based on medians rather than means and those using the percentage of forecasts that exceed an established error threshold.
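MASE itself is simple to compute: the forecast MAE divided by the in-sample MAE of the one-step naïve (random-walk) forecast. A minimal sketch (the function name is illustrative):

```python
def mase(actual, forecast, train):
    """Mean Absolute Scaled Error: out-of-sample forecast MAE scaled by
    the in-sample MAE of the one-step naive forecast y_hat[t] = y[t-1].
    A value below 1 means the model beats the naive method on average."""
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    naive_mae = sum(abs(train[t] - train[t - 1])
                    for t in range(1, len(train))) / (len(train) - 1)
    return mae / naive_mae
```

Because the scaling term comes from the training data, MASE is well defined even for intermittent, low-demand SKUs where percentage errors break down on zero actuals.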
We evaluate the out-of-sample forecasting ability of the three estimators, the LS, the RMA, and the GT, with two alternative forecasting schemes. First, we use the first 69 of the 139 observations to obtain h-step-ahead forecasts, and then keep forecasting recursively, adding one observation in each iteration, until we forecast the last observation. Second, we obtain h-step-ahead forecasts using the first 69 observations, then keep forecasting with a
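The two schemes differ only in how the estimation sample evolves. The sketch below (illustrative names; any fitted model can be plugged in via `fit_forecast`) contrasts the recursive expanding-window scheme described first with a fixed-size rolling window, which the truncated second sentence appears to describe:

```python
def expanding_forecasts(series, start, h, fit_forecast):
    """Recursive scheme: refit on series[:t] and produce an h-step-ahead
    forecast, growing the estimation sample by one each iteration."""
    return [fit_forecast(series[:t], h)
            for t in range(start, len(series) - h + 1)]

def rolling_forecasts(series, window, h, fit_forecast):
    """Rolling scheme: the estimation sample keeps a constant size,
    dropping the oldest observation as each new one is added."""
    return [fit_forecast(series[t - window:t], h)
            for t in range(window, len(series) - h + 1)]
```

With `start = window = 69` and 139 observations, both loops produce one forecast per remaining observation; they differ only in whether early observations stay in the estimation sample.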
For this research, a supply and demand chain simulation model, based on the actual GlaxoSmithKline (GSK) logistics system (SPIRiT), was developed. In this model, we generated a series of random variables as the real daily demand, evenly distributed over a fixed range determined by SFA. Four GSK products were selected as samples for the simulation, and their specific information was extracted from SPIRiT. We then developed assumptions about sales forecast analysts' salary cost, the effort required to improve SFA, and the qualification of expected earnings. Finally, the key SFA of the different items was determined from the simulation results for annual profit, lost-demand cost and inventory cost.
As argued in the introduction, in many practical situations some observations are more important to forecast accurately than others. For example, in empirical macroeconomics, forecasting the start of a recession is of vital importance. Note that this start of a recession would most likely correspond with, for example, large negative observations for output growth or large positive observations for the change in unemployment. Similarly, in financial applications one usually is particularly interested in accurately forecasting extreme negative returns on investments, as evidenced by the enormous interest in Value-at-Risk measures. Hence, when selecting among competing forecasting models, it makes sense to focus on these crucial observations or, put differently, to put more weight on those observations relative to less important ones.
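One simple way to operationalize this idea is a weighted loss in which each squared forecast error is weighted by a function of the realized observation. This is an illustrative sketch, not the specific criterion developed in this paper:

```python
def weighted_mse(errors, actuals, weight):
    """Weighted MSE: each squared forecast error is weighted by a
    user-chosen function of the realized observation, so that crucial
    observations (e.g. large negative returns, recession onsets)
    count more in model comparison."""
    num = sum(weight(a) * e ** 2 for e, a in zip(errors, actuals))
    den = sum(weight(a) for a in actuals)
    return num / den

# example: double the weight on negative realizations
downside = lambda a: 2.0 if a < 0 else 1.0
```

Ranking competing models by such a weighted criterion, rather than ordinary MSE, directs model selection toward accuracy on the observations that matter most.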
The Federal Reserve Bank of Cleveland, like a number of other central bank institutions, makes use of Bayesian Vector Autoregressions (BVARs) in forecasting. BVARs have a long history in forecasting, stimulated by their effectiveness documented in the seminal studies of Doan, Litterman, and Sims (1984) and Litterman (1986). In recent years, the models seem to be used even more systematically for policy analysis and forecasting macroeconomic variables (e.g., Kadiyala and Karlsson 1997, Koop 2010). At present, there is considerable interest in using BVARs for these purposes in a large dataset context (e.g., Banbura, Giannone, and Reichlin, 2010, Carriero, Kapetanios, and Marcellino 2009, 2010a). Putting BVARs to use in practical forecasting raises a host of detailed questions about model specification and estimation. Under some approaches, estimating and forecasting with a BVAR can be technically and computationally demanding. In general, one of the major stumbling blocks that has sometimes limited the use of BVARs as models for forecasting and policy analysis has been the large computational burden they can pose, especially when Markov Chain Monte Carlo (MCMC) simulation methods are used for estimation. In many cases, such as a BVAR with a Minnesota prior patterned after Litterman's (1986) specification, technically proper estimation requires MCMC methods. In particular, in a forecasting context, the natural point forecast from a BVAR is the mean of the posterior distribution of forecasts. In many instances, the posterior mean can only be computed by MCMC (e.g., Geweke and Whiteman 2006, Kadiyala and Karlsson 1997). The development of computing power has substantially alleviated this problem.
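To see why MCMC is often needed, it helps to contrast with the one special case where it is not. Under an independent Normal prior on the coefficients with a fixed noise variance, the posterior mean of a regression (each BVAR equation is one such regression) is available in closed form; richer priors such as Litterman's full specification generally break this conjugacy, which is when simulation is required. A sketch of the closed-form case (illustrative, not any particular central bank's implementation):

```python
import numpy as np

def posterior_mean(X, y, prior_mean, prior_var, noise_var):
    """Posterior mean of regression coefficients under the prior
    beta ~ N(prior_mean, diag(prior_var)) with known noise variance:
    mean = (V0^{-1} + X'X/s2)^{-1} (V0^{-1} m0 + X'y/s2).
    Shrinks the OLS estimate toward the prior mean."""
    V0inv = np.diag(1.0 / np.asarray(prior_var, float))
    A = V0inv + (X.T @ X) / noise_var
    b = V0inv @ np.asarray(prior_mean, float) + (X.T @ y) / noise_var
    return np.linalg.solve(A, b)
```

As the prior variances grow, the posterior mean approaches OLS; as they shrink, it collapses to the prior mean. This tractable shrinkage is what the Minnesota prior exploits in spirit, even though its exact form typically requires MCMC, as the text notes.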
However, in general the computation of nonlinear functions of the parameters such as impulse-response functions and multi-step forecasts still requires time-consuming simulations, and the computing cost is particularly relevant when these exercises are conducted recursively over a long time span, or with models including more than a handful of variables.
In a setting where the primary financial statements have been converted from individual financial statements to consolidated financial statements in Korea, we examine the effect of segment information disclosed by the firm on analysts' consolidated-basis earnings forecast accuracy. Since Korean firms prepared their primary financial statements on a non-consolidated basis in the pre-IFRS regime, the adoption of International Financial Reporting Standards (IFRS) creates considerable difficulty and complexity in making accurate consolidated forecasts for users of financial statements, even for financial analysts, who are sophisticated users. In this situation, we conjecture that the amount of detail and the types of information in segment disclosure will influence analysts' forecast accuracy. Consistent with this prediction, we find that financial analysts are able to make more accurate earnings projections when firms provide more disaggregated accounting figures for each segment. Moreover, we find that analysts can forecast more accurately when firms disclose a more persistent earnings component (i.e., segment operating income). Furthermore, we find that the effect of segment disclosure levels on analysts' forecast accuracy is more pronounced for multi-segment firms. Our results indicate that disaggregated segment information is a useful source for financial analysts to gain a better understanding of the complete picture of firms' consolidated earnings and to improve their forecasting performance.
It is a pleasure to participate in a volume celebrating the contributions to economics of Michio Morishima, who was a colleague for many years. The eclectic nature of Michio's extensive publications makes it impossible to choose any topic on which he has never written, and our chapter is related to Morishima and Saito (1964), who developed a macro-econometric model of the US economy. While Morishima and Saito (1964) focus on econometric equations with a close eye on economic-policy analysis, we also consider the relationship between statistical forecasting devices and econometric models in the policy context. In particular, we investigate three aspects of this relationship. First, whether there are grounds for basing economic-policy analysis on the ‘best’ forecasting system. Secondly, whether forecast failure in an econometric model precludes its use for economic-policy analysis. Finally, whether in the presence of policy change, improved forecasts can be obtained by using ‘scenario’ changes, derived from the econometric model, to modify an initial statistical forecast. To resolve these issues, we analyze the problems arising when forecasting after a structural break (i.e., a change in the parameters of the econometric system), but before a regime shift (i.e., a change in the behaviour of non-modelled, often policy, variables), perhaps in response to the break (see Hendry and Mizon, 1998, for discussion of this distinction). These three dichotomies, between econometric and statistical models, structural breaks and regime shifts, and pre and post forecasting events, are central to our results.
The expectation of price accuracy begins as soon as the contract becomes effective, but the industry currently has neither standard practices nor expectations shared by all involved parties. Customers, manufacturers, GPOs and distributors are all impacted by problems caused by inconsistent implementation of new contracts (whether GPO or local), GPO tier activations/changes and expiring/extending contracts. These problems manifest themselves in costly rework due to increasing occurrences of:
la Diebold and Mariano (1995). First, West (1996) showed that parameter estimation error may not be asymptotically irrelevant and may therefore affect the limiting distribution of the test statistics. Second, if models are nested, statistics based on average comparisons of prediction errors have a degenerate limiting variance under the null hypothesis and are not asymptotically normally distributed. For nested models, McCracken (2007) and Clark and McCracken (2001) derive the appropriate non-Gaussian limits for tests of, respectively, equal MSPE and FE; the critical values are tabulated across two nuisance parameters (the ratio of the size of the prediction sample to that of the estimation sample, and the number of additional regressors in the larger model) and they are, in general, valid only for one-step-ahead predictions. The test of forecast encompassing for nested models proposed by Chao, Corradi and Swanson (2001) does not suffer from this degeneracy: its limiting distribution is a chi-square under the null hypothesis. A different approach is taken by Giacomini and White (2006), who focus on comparing
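For the non-nested baseline case, the Diebold-Mariano statistic itself is straightforward to compute. A minimal sketch under squared-error loss (illustrative; a rectangular truncation at lag h − 1 is used for the long-run variance, one of several common choices):

```python
import numpy as np

def dm_statistic(e1, e2, h=1):
    """Diebold-Mariano statistic for equal squared-error loss between
    two forecast error series. The loss differential d_t = e1_t^2 - e2_t^2
    is averaged and studentized with a long-run variance estimate
    (rectangular kernel, truncation lag h-1). Under the null of equal
    accuracy it is asymptotically N(0, 1) for non-nested models."""
    d = np.asarray(e1, float) ** 2 - np.asarray(e2, float) ** 2
    n = len(d)
    dbar = d.mean()
    gamma = lambda k: np.mean((d[k:] - dbar) * (d[:n - k] - dbar))
    lrv = gamma(0) + 2.0 * sum(gamma(k) for k in range(1, h))
    return dbar / np.sqrt(lrv / n)
```

It is exactly this studentization that degenerates for nested models: under the nested null the loss differentials shrink to zero, so the denominator collapses and the normal limit fails, motivating the McCracken and Clark-McCracken critical values discussed above.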
which is required for efficient supply chain operations and overall firm profitability (Yaro, Brent, Travis & Matthew, 2015), and retail pharmacy is no exception. Accurate forecasts are a requirement for optimal inventory control, meeting customer demand and reducing operational costs (Olimpia, Nela & Camelia, 2016). Accurate forecasts help companies prepare for short- and long-term changes in market conditions and improve operating performance (Wacker & Lummus, 2002). Chihyun and Dae-Eun's (2016) study also confirmed that accurate demand forecasting is important for sustaining the profitability of the firm, because demand forecasts influence the firm in various ways, such as in strategy-setting and in developing production plans. Gupta, Maranas, McDonald and Doganis (2000) asserted that exact sales forecasting is used to capture the trade-off between customer demand satisfaction and inventory costs. Because of this usefulness, especially in today's rapidly changing and less predictable business environment, managers and academics have no choice but to devote more attention to how forecasting can be improved to increase demand forecast accuracy (Gilliland, 2011; Rakesh & Dalgobind, 2013). In a retail pharmacy, successful sales forecasting systems can be very beneficial, owing to the short expiry dates of many pharmaceutical products and the importance of product quality, which is closely related to human health (Doganis, Alexandridis, Patrinos & Sarimveis, 2006, as cited by Neda, Mohammad & Hamid, 2014). Accurate demand forecasting is thus crucial not only to manufacturing companies; it must be taken even more seriously in retail outlets such as supermarkets and retail pharmacies because of the high stock levels, high customer needs and traffic they experience daily.
External Trade (EXT) and Reserve Money (RM); refer to  and . The data were analyzed both descriptively and quantitatively, with diagrams and tables used to clarify the examination. Univariate time series (UTS) analysis treats a series of single observations recorded successively through time, while multivariate time series (MTS) analysis is used to reveal and explain the relationships among a group of economic indicators. Accordingly, this investigation applies the UTS approach by means of an ARIMA model and the MTS approach by means of a VAR. All estimations were carried out using R and EViews software.
Let us go back to the LPG case. Company X, a client of ORTEC, is a supplier of liquefied propane gas (LPG) for professional, agricultural and home use. This propane gas is primarily used to heat premises, but in this chapter it becomes clear that this is not its only purpose. The clients of Company X generally need replenishment just before running out of gas. For Company X to achieve this without visiting customers too often just to be on the safe side, a good forecast is required of when the storage tanks will drop below their safety stock level. The forecast engine is part of ORTEC Inventory Routing (OIR). Good forecasts are of great importance since the forecast indicates how much LPG should be delivered to each customer. Multiple customers can be visited in a single route. When a route is planned, a certain order volume is reserved for each customer. In reality, however, the truck driver fills the tank until it is full. At the end of the planned route, the truck should be exactly empty. When the forecasts are inaccurate, two things can happen: the truck is empty before having visited the last customer(s) on its route (which happens in 38% of the trips), or LPG is left over after visiting the last customer on the route. When the latter happens, another customer must quickly be found in order to empty the truck anyway. Both should be avoided. Figure 3.1 plots the difference between the planned amount of LPG and the amount that is actually delivered. We see that there is a tendency to deliver more than planned.
Sandhya et al.  developed a hierarchical hybrid intelligent system model integrating decision trees, support vector machines and an ensemble of various base classifiers. The hybrid model maximized detection accuracy and reduced computational complexity. The major advantage of the ensemble approach is that information from different classifiers is merged to decide whether an intrusion has occurred or not; the effectiveness of the model rests on the diverse nature of the base classifiers. Eskin  designed an unsupervised anomaly detection model and implemented it using three unsupervised techniques: clustering, k-nearest neighbor and Support Vector Machine. The advantage of these algorithms is that they can be applied to any feature space to model diverse kinds of data, and the model can be extended to more feature maps with various kinds of data and to widespread experiments on the data. Li et al.  proposed an efficient intrusion detection method combining clustering, an ant colony algorithm and SVM, resulting in an efficient model that determined whether a network data packet is classified as normal or abnormal. However, the model had a strategy of selecting a small training data set, which may not be suitable for multiple classification problems. Bhavesh et al.  developed a hybrid technique to identify anomalies by integrating Latent Dirichlet Allocation (LDA) and a genetic algorithm (GA). LDA determines the subset of attributes for anomaly identifi-
However, post-deployment and reactive measures, such as software patching and upgrades, damage assessment, logging analysis and the installation of intrusion detection and prevention systems, to mention a few, have not stopped or deterred attackers from continuously bombarding these software assets with ever more sophisticated attacks that exploit software vulnerabilities (Kar and Panigrahi, 2013; Li et al., 2017). These myriad unending threats have prompted software security experts to propose proactive strategies for building security into the traditional Software Development Life Cycle (SDLC); hence the Secure Software Development Lifecycle (SSDL) paradigm came to life (OWASP, 2014a; Tatli, 2018). Given the unique culture and practices of disparate IT firms, many tech giants have thrown their weight behind the creation of proprietary software security models, such as the Trustworthy Computing Secure Development Lifecycle from Microsoft (Microsoft, 2005), CLASP (Comprehensive Lightweight Application Security Process) and OpenSAMM (Software Assurance Maturity Model) from OWASP (Open Web Application Security Project) (OWASP, 2016), and Touchpoints from Cigital (McGraw, 2006). Interestingly, over 60 Fortune-listed tech companies, such as SONY, VISA, Intel and Microsoft, have collaborated to develop a descriptive framework tagged BSIMM (Building Security In Maturity Model) (BSIMM, 2014). These new paradigms and the aggressive allocation of resources (funds and manpower) to such projects have empowered development and security teams to address security issues during the earliest stages of system development (Karpati et al., 2014). In the secure software development lifecycle, one of the critical approaches to defending the organization's software infrastructure is to anticipate the nature of attacks from the attacker's perspective before they happen and to strategize mitigation plans to prevent these attacks from being successful.
This is called Threat Modeling (Groves, 2013).
The key difference between the new revenue recognition model and previous ones is its focus on customer contracts. The basic unit of interest is a contract, and no two contracts are necessarily alike. As a result, two companies operating in the same industry could account for a similar transaction differently depending on their contractual obligations. Thus, contract terms and the related performance obligations are crucial to revenue recognition. When considering which industries will be most affected by the new revenue recognition standard, the focus should be not the particular industry but rather the particular contracts. The industries most likely to be significantly affected are those dealing with contracts that might be accounted for differently under the new standard. Industries with short-term contracts and simple transactions, such as retail and consumer goods, are unlikely to see changes in their accounting treatment, whereas industries with long-term contracts and complex transactions containing multiple elements, possibly requiring customization of the provided products and services, are likely candidates for significant changes. (Ciesielski & Weirich 2011.)
An important task for property analysts is the application of various methodologies to identify the best-performing models, i.e., those that provide better forecasts. However, this task requires time and knowledge to apply various methodologies in different scenarios. Additionally, the characteristics of the data (for example, strongly trended versus non-trended) can determine the extent to which various methodologies can be used (Brooks and Tsolacos, 2010; Matysiak and Tsolacos, 2003). There have been several scholarly efforts to determine the outperforming models based on the aforementioned accuracy measures. For instance, the regression equation has been found better than exponential smoothing, the error correction model and the naïve technique (D'Arcy, McGough, and Tsolacos, 1999; McGough, Tsolacos, and Olli, 2000); ARIMAX outperforms regression equations (Karakozova, 2004); and Bayesian VAR outperforms ARIMA, single-equation and simultaneous-equation regression (Stevenson and McGarth, 2003). Nevertheless, it must be noted that all these techniques have their own pros and cons and are suited to different situations. For example, error correction models would be used when one has good reason to believe that the time series contains a unit root; they are also useful if one is trying to determine how quickly the time series reverts to a long-run mean following a shock. However, regression (for example, pooled Ordinary Least Squares) might be useful when dealing with a simple cross-section of data.