
Improving the Performance of Popular Supply Chain Forecasting Techniques

Georgios P. Spithourakis

Forecasting & Strategy Unit, School of Electrical and Computer Engineering, National Technical University of Athens, Greece

giorgos@fsu.gr

Fotios Petropoulos

Forecasting & Strategy Unit, School of Electrical and Computer Engineering, National Technical University of Athens, Greece

fotis@fsu.gr

M. Zied Babai

BEM Bordeaux Management School, France

mohamed-zied.babai@bem.edu

Konstantinos Nikolopoulos

BEM Bordeaux Management School, France; The Business School, Bangor University, Bangor, UK

k.nikolopoulos@bangor.ac.uk

Vassilios Assimakopoulos

Forecasting & Strategy Unit, School of Electrical and Computer Engineering, National Technical University of Athens, Greece

vassim@fsu.gr

This article empirically investigates the extension of an aggregation-disaggregation forecasting approach for intermittent demand (ADIDA) to fast-moving demand data, addressing the need of supply chain managers for accurate forecasts. After a brief introduction to the framework and its background, an experiment is set up to examine its performance on data from the M3-Competition. The relevant forecasting methodology and in-sample optimization techniques are described in detail, as are the core experimental structure and the real data. Empirical results on forecasting accuracy are presented and discussed, with particular emphasis on the managerial implications of the framework as a simple, cost-efficient, and universally implementable self-improving mechanism for forecasting methods. Finally, the conclusions are summarized and guidelines for prospective research are proposed.

Keywords: forecasting, fast-moving demand, temporal aggregation, in-sample optimization, forecasting framework, empirical investigation

Introduction

Practitioners of supply chain management must make use of a variety of up-to-date tools to reach informed and successful decisions. Demand and inventory forecasts are necessary for strategic planning and operational efficiency in a wide range of areas, such as production and stock control, and managers are more than willing to spend money, time, and effort to acquire them. Although slower-moving items may constitute up to 60% of the total stock value in a supply chain (Johnston et al., 2003), the remaining items present fast-moving behavior. Therefore, improved estimations concerning these items are likewise important for efficient supply chain management and may lead to significant cost savings.

The main aim of this study is to present the potential forecasting accuracy improvement benefits associated with using the ADIDA forecasting framework (aggregate-disaggregate intermittent demand approach; Nikolopoulos et al., 2011) to produce forecasts for non-intermittent, fast-moving demand. Our proposition is backed up by an empirical evaluation of the expected accuracy improvement on real data. The incentive was provided by ADIDA's reported function as a self-improving mechanism, paired with its computational simplicity. Although ADIDA was initially proposed as a forecasting framework for intermittent demand time series, it does not require the series themselves to be intermittent by nature, which means that the methodology can

be applied, unmodified, to forecast fast-moving demand as well. After a short review of the ADIDA background, this article proceeds to a thorough description of the conducted experiment. First, important aspects of the incorporated forecasting methodology and estimation procedures are introduced, followed by the presentation of the empirical data. A complete report of the experiment's layout is then given, accompanied by its empirical results. The study's outcomes are discussed, with emphasis on the managerial implications and the practical benefits that can be obtained. Finally, conclusions are summed up and proposals for future research are made. Details related to the implementation of the forecasting methods and the forecasting accuracy measures discussed in this article are presented in Appendices A and B.

Background and Literature

ADIDA is a forecasting framework that was originally developed for intermittent demand. It can be implemented in four simple steps:

1. Gather the original data

2. Apply non-overlapping temporal aggregation at an aggregation level, A

3. Extrapolate the aggregate time series by means of a forecasting method, F

4. Disaggregate aggregate forecasts back to the original time scale via a disaggregation algorithm, D

In more detail, the original data must first be collected so that a time series to be forecast is formed. The observations must then be sequentially allocated into non-overlapping time buckets of length

A, starting from the end of the series and going backwards, which leaves N mod A unused observations at its beginning. The content of each time bucket is aggregated to produce the aggregate time series, which is extrapolated by use of a forecasting method; the forecasts are then broken down into the original

time scale by use of a disaggregation algorithm, typically a set of disaggregation weights. A specific ADIDA implementation is briefly represented as ADIDA(A, F, D). A unitary aggregation level, that is, ADIDA(1, F, D), is equivalent to the traditional use of a forecasting method outside the context of the ADIDA framework. For example, Figure 1 illustrates the ADIDA(4, Naïve, EQW) implementation, where quarterly data are aggregated to create yearly time buckets, extrapolated with naïve, and disaggregated back to quarterly frequency with equal weights (EQW).
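To make the four steps concrete, the following Python sketch implements ADIDA(A, F, EQW) for a one-step-ahead aggregate forecast spread over the original time scale; the function names and the naïve forecaster passed in are illustrative assumptions of ours, not code from the original study.

```python
import numpy as np

def adida_forecast(series, agg_level, forecaster, horizon):
    """Minimal ADIDA(A, F, EQW) sketch: aggregate, extrapolate, disaggregate.

    series:     demand observations on the original time scale
    agg_level:  aggregation level A (periods per non-overlapping time bucket)
    forecaster: function mapping the aggregate series to one aggregate forecast
    horizon:    number of periods to forecast on the original time scale
    """
    series = np.asarray(series, dtype=float)
    assert len(series) >= agg_level, "need at least one complete time bucket"
    # Step 2: build buckets backwards from the end of the series;
    # the first N mod A observations remain unused.
    n_unused = len(series) % agg_level
    buckets = series[n_unused:].reshape(-1, agg_level)
    aggregate = buckets.sum(axis=1)

    # Step 3: extrapolate the aggregate series with forecasting method F.
    agg_forecast = forecaster(aggregate)

    # Step 4: disaggregate with equal weights (EQW), i.e. each weight is 1/A.
    return np.full(horizon, agg_forecast / agg_level)

# Example corresponding to Figure 1: ADIDA(4, Naive, EQW) on quarterly data.
naive = lambda x: x[-1]
quarterly = [12, 15, 11, 14, 13, 16, 12, 15]
print(adida_forecast(quarterly, agg_level=4, forecaster=naive, horizon=4))
```

For this illustrative quarterly example, the two yearly buckets sum to 52 and 56, the naïve aggregate forecast is 56, and the equal-weight disaggregation returns 14 per quarter.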

This approach made it possible for supply chain managers to successfully use forecasting methods that had previously been suitable only for fast-moving data to forecast intermittent demand, such as slow-moving spare parts and stock items, a possibility that can mostly be attributed to the reduced intermittency of the accumulated demand. Not only was the forecaster's toolkit expanded to incorporate popular methods, for example, exponential smoothing methods and the theta method, but ADIDA also seemed to improve the accuracy of forecasting methods specific to intermittent demand. In particular, Nikolopoulos et al. (2011) applied the framework to produce rolling forecasts for a dataset of 5,000 intermittent demand time series

from stock-keeping units (SKU) of the British Royal Air Force (RAF). Testing naïve and SBA (Syntetos & Boylan, 2001) forecasting methods, the latter of which is an extension of Croston's forecasting method (Croston, 1972), at various aggregation levels and with alternative disaggregation weights, they empirically showed that ADIDA can result in lowered forecasting errors and characterized it as a forecasting method self-improving mechanism. A key concept of ADIDA is that of temporal aggregation, which is associated with the risk of information loss and data model modification. A comprehensive study of temporal aggregation was conducted by Silvestrini and Veredas (2008), whereas fewer studies have considered the intermittent demand case (Brännäs et al., 2002; Willemain et al., 1994). Other investigations have looked into the diversity of information from different time aggregations and a forecast-combination approach was successfully tested on tourism demand (Andrawis et al., 2011). In all cases, determination of an appropriate aggregation level seems crucial for the final outcome. This issue was addressed by Nikolopoulos et al. (2011), who proposed a managerial heuristic for the aggregation level, setting it equal to an item's lead time plus a review time (typically, one period). Further research into temporal aggregation and its effect on stock performance measures was conducted in a subsequent study (Babai et al., 2011).

The performance of three disaggregation algorithms has so far been examined (Nikolopoulos et al., 2011), all based on a set of fixed weights: EQual Weights (EQW), PRevious Weights (PRW), and AVerage Weights (AVW). EQW, under which all weights are equal to the multiplicative inverse of the aggregation level (1/A), produced the best results in terms of forecasting accuracy and, therefore, is the disaggregation scheme chosen for the scope of this research.


Experimental Structure

Forecasting Methodology

In this research, forecasting methods from the family of exponential smoothing methods, widely used in business and industry (De Gooijer & Hyndman, 2006), were tested. Naïve is the simplest statistical forecasting method, requiring neither initializations nor optimizations. The single exponential smoothing (SES) method uses the series level and a corresponding smoothing parameter, and Holt's method (Holt, 1957) adds the series' (supposedly linear) trend and a trend-smoothing parameter. Damped exponential smoothing (Gardner, 1985) is similar, but it also dampens the extrapolated trend with a damping factor, thus calling for three parameter optimizations and two initializations (level and

trend). These multidimensional and continuous optimizations may be time consuming, whereas suboptimal choices can compromise the method's forecasting accuracy. All optimizations were based on the minimization of the in-sample MSE, and the linear regression parameters were used for the initializations. The classic theta method (Assimakopoulos & Nikolopoulos, 2000), which has computational requirements and complexity similar to those of SES, was also investigated because it produced the best forecasting accuracy for the monthly data of the M3-Competition (Makridakis & Hibon, 2000). More details on the forecasting methods considered in our research are provided in Appendix A.
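As a hedged illustration of this optimization step, the sketch below fits single exponential smoothing with its smoothing parameter chosen by a simple grid search over the in-sample MSE; the grid search and the first-observation initialization are our own simplifications (the study initializes from linear regression parameters, and the methods with trend require additional parameters).

```python
import numpy as np

def ses_fit_forecast(series, horizon=1):
    """SES with the smoothing parameter alpha chosen by minimizing in-sample MSE."""
    series = np.asarray(series, dtype=float)

    def in_sample_mse(alpha):
        level, errors = series[0], []            # simple initialization: first observation
        for obs in series[1:]:
            errors.append(obs - level)           # one-step-ahead in-sample error
            level = alpha * obs + (1 - alpha) * level
        return float(np.mean(np.square(errors)))

    best_alpha = min(np.linspace(0.01, 0.99, 99), key=in_sample_mse)

    level = series[0]                            # final pass with the optimized parameter
    for obs in series[1:]:
        level = best_alpha * obs + (1 - best_alpha) * level
    return np.full(horizon, level), best_alpha
```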

Time series can be broken down into separate components, accounting for different

characteristics of the original series, which can usually be assumed to be combined in a multiplicative way, for the sake of simplicity. Traditional partitions are trend, seasonality, cycle, and randomness, which can be extracted via a decomposition process. Classical decomposition is the most widely used, being described in many forecasting textbooks (e.g., Makridakis et al., 1998). Seasonality can be handled by deseasonalizing, forecasting, and then reseasonalizing the time series.
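The deseasonalize-forecast-reseasonalize pattern can be sketched as follows; the ratio-to-moving-average index computation is a simplified stand-in for full classical decomposition, and all helper names are our own.

```python
import numpy as np

def seasonal_indices(series, period=12):
    """Multiplicative seasonal indices via a simplified ratio-to-moving-average scheme."""
    series = np.asarray(series, dtype=float)
    ma = np.convolve(series, np.ones(period) / period, mode="valid")  # moving average
    offset = period // 2                                  # roughly centre the average
    ratios = series[offset:offset + len(ma)] / ma         # observation-to-moving-average ratios
    seasons = (offset + np.arange(len(ratios))) % period
    indices = np.array([ratios[seasons == s].mean() for s in range(period)])
    return indices / indices.mean()                       # normalize so the indices average to 1

def deseasonalize(series, indices):
    # Assumes the series starts at season 0 of the index vector.
    return np.asarray(series, float) / np.resize(indices, len(series))

def reseasonalize(forecasts, indices, start_season):
    seasons = (start_season + np.arange(len(forecasts))) % len(indices)
    return np.asarray(forecasts, float) * indices[seasons]
```

A forecast produced on the deseasonalized series is then multiplied back by the index of each forecast period, with the first forecast period's season given by the in-sample length modulo the seasonal period.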

In various aspects of forecasting accuracy evaluation, calculation of error measures is needed. Error measures can be calculated either using in-sample observations, involving exclusively those samples originally used to construct the forecasting model, or using out-of-sample observations, where the newly available values of the time series are considered. Therefore, in-sample errors provide a measure of the goodness of fit of a model, and out-of-sample errors are used to evaluate a method's forecasting accuracy. Two common error measures are the mean square error (MSE) and the symmetric mean absolute percentage error (sMAPE). In this study, in-sample MSE was used for the optimization of the smoothing parameters, selecting the ones that minimize it, and out-of-sample sMAPE was used as a measure of forecasting accuracy. The choice of sMAPE allows for comparison with the results of the M3-Competition. More details on these two error measures are given in Appendix B.
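For reference, here is a short sketch of the two measures as used here, with sMAPE in the symmetric percentage form reported for the M3-Competition; the denominator assumes strictly positive actuals and forecasts, as is the case for this dataset, and the variable names are illustrative.

```python
import numpy as np

def mse(actuals, forecasts):
    """Mean square error over paired actual and forecast values."""
    actuals, forecasts = np.asarray(actuals, float), np.asarray(forecasts, float)
    return float(np.mean((actuals - forecasts) ** 2))

def smape(actuals, forecasts):
    """Symmetric MAPE in percent: |D_t - F_t| relative to the average of D_t and F_t."""
    actuals, forecasts = np.asarray(actuals, float), np.asarray(forecasts, float)
    # Assumes strictly positive values, so no absolute value is needed in the denominator.
    return float(np.mean(2.0 * np.abs(actuals - forecasts) / (actuals + forecasts)) * 100.0)
```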

Estimation of an Optimal Aggregation Level per Series

Empirical results indicate that there might be an optimal aggregation level per series, probably dependent on its specific quantitative and qualitative characteristics. Unfortunately, the managerial heuristic of Nikolopoulos et al. (2011) can be applied only in the specific case of SKU time series and when the lead time is available. A statistical estimation criterion can be used instead to estimate optimal aggregation levels per series from the in-sample data. However, this optimal value is ultimately dependent on the out-of-sample data. To select an optimal level, that is, an optimal ADIDA model, one should calculate the criterion values for a set of candidate models and then select the one that returns the minimum criterion value. In this study we consider the mean square error (MSE), the Bayesian information criterion (BIC), and the Akaike information criterion (AIC).

The value of the MSE criterion coincides with the in-sample mean square error. MSE is one of the most widely used measures of the goodness of fit of a statistical model to the available real data, and it is directly tied to the variance of the errors. Its simple formula allows for complex mathematical manipulation, and MSE has been used extensively as a basis for other statistical criteria. The MSE must be calculated using the in-sample observations that have been used to compose the aggregate series, that is, over the effective in-sample, whose size equals the number of time buckets multiplied by the aggregation level.

BIC is a fundamental criterion for statistical model comparisons and it is alternatively referred to as the Schwarz information criterion (Schwarz, 1978). It is related to the maximum likelihood estimation, with an extra penalty term that grows with respect to the order of the model, k, on the ground of parameter parsimony. Such an approach is justified by the fact that higher-order models tend to overfit the in-sample data and thus result in a poorer fit with new data. Furthermore, higher-order models are harder to construct, train, and maintain due to their computational complexity (execution time and memory requirements). Under the assumption that model errors are independent and normally distributed, we can use the following formula to calculate BIC (except for a constant term, which does not affect the model selection process):

$$\mathrm{BIC} = N \ln(\mathrm{MSE}) + k \ln(N)$$

The formula incorporates the MSE and a penalty term that depends on the effective in-sample number of observations, N, and on the order of the model, k, which is set equal to the aggregation level because this is the exact number of in-sample values that the model uses to produce each forecast.

AIC (Akaike, 1974) is another model comparison criterion similar to BIC. However, its penalty term for higher-order models is larger than the one assigned by BIC. The criterion, which is based on the entropy of information, is a relative measure of the loss of information caused by the use of a model to describe real data. Assuming again independent and normally distributed model errors, the value of the criterion is given by the formula:

$$\mathrm{AIC} = N \ln(\mathrm{MSE}) + 2k$$
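Both criteria reduce to a penalty added to N ln(MSE), so once the effective in-sample MSE of each candidate ADIDA model is known, selecting an aggregation level is a one-line minimization. The following sketch uses illustrative names and assumes the caller supplies the per-level effective in-sample MSE and sample size.

```python
import numpy as np

def criterion_value(in_sample_mse, n_effective, agg_level, criterion="BIC"):
    """BIC = N ln(MSE) + k ln(N); AIC = N ln(MSE) + 2k, with the order k set equal to the
    aggregation level and N the effective in-sample size (buckets times aggregation level)."""
    penalty = agg_level * np.log(n_effective) if criterion == "BIC" else 2 * agg_level
    return n_effective * np.log(in_sample_mse) + penalty

def best_aggregation_level(candidates, criterion="BIC"):
    """candidates: dict mapping aggregation level A -> (effective in-sample MSE, effective N).
    Returns the level whose ADIDA model minimizes the chosen criterion."""
    return min(candidates,
               key=lambda A: criterion_value(candidates[A][0], candidates[A][1], A, criterion))
```

Under the plain MSE criterion, the same selection simply returns the level with the smallest effective in-sample MSE.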

Empirical Data

As an experimental basis, we used the 1,428 monthly time series of the M3-Competition, which is the most extensive completed forecasting competition to date, studying the accuracy of a wide range of forecasting methods applied to 3,003 time series from six diverse fields (micro, industry, macro, finance, demographic, and other data) and four different time scales (monthly, quarterly, yearly, and other data). All time series were strictly positive and represented fast-moving demand. Descriptive statistics of the demand data are provided in Table 1.

The computed forecast errors can be used as benchmarks for other, untested methods and forecasting approaches, such as the one considered in this article. During the M3-Competition, 18-point forecasts were requested for the monthly series. The monthly dataset had a median of 115 observations per series, with no series having fewer than 50 observations, plus 18 observations corresponding to future values.

Table 1. Descriptive statistics of the demand data (1,428 monthly series)

                   Mean Demand    Standard Deviation of Demand
Minimum              1268.235          96.845
Lower Quartile       3787.375         592.691
Median               4851.663         919.795
Upper Quartile       5919.231        1386.730
Maximum             19331.746       16445.442

Experimental Setup

The implemented experimental structure is described in this section. The last 18 observations, constituting the out-of-sample part of the series, were held out from each monthly time series so that the experimental results would be directly comparable with the results of the M3-Competition. Subsequently, the remainder (in-sample part) of the time series was deseasonalized, using the original seasonal indices provided by the M3-Competition. These, in turn, were derived through the process of multiplicative classical decomposition applied to time series exhibiting seasonality. The ADIDA framework was then applied to the deseasonalized series to produce 18-point forecasts. The forecasting method was selected among naïve, SES, Holt, damped, and theta, although any other extrapolation method could also be used. An upper limit was set for the maximum aggregation level up to which ADIDA models would be considered, and all models up to that bound were run to produce deseasonalized forecasts. From the available disaggregation algorithms, we decided to stick to disaggregation by equal weights (EQW) because it produced the best results in previous research (Nikolopoulos et al., 2011). Because all data represented strictly positive variables, any non-positive forecast values were replaced by the corresponding naïve forecasts. The resulting forecasts were reseasonalized and compared to the original in-sample observations to obtain the model selection criterion values. Be it MSE, BIC, or AIC, the criterion must be calculated over the effective in-sample part of the series, that is, taking into consideration only the exact observations that were used to build each ADIDA model. The

ADIDA model returning the lowest criterion value was considered optimal and its forecasts were used as the framework's outcome forecasts.
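Putting the pieces together, the per-series selection step might look like the sketch below, which reuses the illustrative helpers sketched earlier (adida_forecast, mse, criterion_value). The one-step-ahead in-sample fitting loop and the small positivity floor (standing in for the replacement of non-positive values by naïve forecasts) are our own simplifications.

```python
import numpy as np

def select_adida_model(deseason, forecaster, max_level, horizon, criterion="BIC"):
    """Run ADIDA(A, F, EQW) for A = 1..max_level on the deseasonalized in-sample series,
    score each candidate on its effective in-sample fit, and return the forecasts of the
    model with the lowest criterion value."""
    deseason = np.asarray(deseason, dtype=float)
    best_score, best_forecasts = np.inf, None
    for A in range(1, max_level + 1):
        n_effective = (len(deseason) // A) * A           # observations actually used
        usable = deseason[len(deseason) - n_effective:]
        # One-step-ahead in-sample forecasts: for each period after the first bucket,
        # apply ADIDA to the complete buckets observed so far.
        fitted = np.array([adida_forecast(usable[:t - (t % A)], A, forecaster, 1)[0]
                           for t in range(A, n_effective)])
        fitted = np.maximum(fitted, 1e-8)                # crude guard; the study swaps in naive forecasts
        score = criterion_value(mse(usable[A:], fitted), n_effective, A, criterion)
        if score < best_score:
            best_score = score
            best_forecasts = adida_forecast(deseason, A, forecaster, horizon)
    return best_forecasts
```

For brevity, the sketch scores the fit on the deseasonalized scale, whereas the experiment reseasonalizes the fitted values before computing the criterion and reseasonalizes the winning forecasts before comparing them with the 18 held-out observations.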

In the end, the forecasts were compared against the held-out out-of-sample data to calculate forecasting accuracy measures. Symmetric MAPE was preferred for its use in the M3-Competition and its ease of comparison among different series.

Results and Discussion

Conducting the experiment on real data yielded the results presented in this section. The figures in this section depict the forecasting accuracy in terms of out-of-sample sMAPE error for an 18-period forecasting horizon, averaged over the whole monthly dataset of 1,428 time series. The horizontal axis shows the upper search bound for the estimation of the optimal aggregation level, that is, the number of models considered, and different lines correspond to different estimators. Results for a unitary aggregation level are equivalent to the simple use of the selected forecasting method and thus can be used as a benchmark for the framework's performance.

Figure 2. ADIDA(optimal, Naïve, EQW)

Figure 3. ADIDA(optimal, SES, EQW)

The forecasting methods can be divided into two categories, producing effectively different results. ADIDA with naïve (Figure 2), SES (Figure 3), or classic theta (Figure 4) seems to lead to significant accuracy improvements. The BIC and AIC estimators induced the largest error drops for a maximum aggregation level of 12 periods. Lesser error reductions occurred for shorter aggregation levels, whereas, in the cases of SES and theta, the errors became unstably divergent when longer aggregations were allowed. Naïve yielded the most coherent improvements, with all three measures finally converging at a sMAPE of around 14.50% (from an initial 16.89%). Nevertheless, the BIC and AIC lines reached as low as just below 14.00% for a shorter upper bound of 12 periods. The performance of the estimations is bounded by the randomness of the series and by the fact that in-sample optimizations do not always carry over to out-of-sample accuracy.

However, the aforementioned divergent behavior was more prevalent for the Holt (Figure 5) and damped (Figure 6) methods. Any improvements occurred at relatively short maximum aggregation levels and seemed circumstantial and fairly small. Inclusion of larger aggregation levels resulted in increasingly deteriorating forecasting accuracy for all three estimators. Again, series randomness and unexpected drifts may compromise the estimation's performance.

The results are summed up in Tables 2 and 3. All combinations of forecasting methods and estimators yielded some degree of error reduction; however, not all reductions were statistically significant. Specifically, at the α = 0.05 level of significance, statistical significance was found for naïve (p < 0.001) and SES (p = 0.004 for MSE and p < 0.001 for BIC and AIC) and for theta with the BIC (p = 0.003) and AIC (p = 0.021) estimators. These combinations also exhibited the largest and most consistent error reductions, with the naïve-AIC pair presenting a 17.29% error improvement.

Figure 4. ADIDA(optimal, theta, EQW)

Figure 5. ADIDA(optimal, Holt, EQW)

Figure 6. ADIDA(optimal, damped, EQW)

Results varied with the upper bound for the aggregation level (Table 3); nevertheless, in the cases that presented significant improvements, a maximum level of around 12 gave fairly good results.

Consequently, ADIDA appears to perform well in many cases for non-intermittent demand data as well. Such performance improvements can be traced back to the original proposition of ADIDA as a forecasting framework for intermittent demand, as the procedure remains essentially the same. Just as the cumulative demand of an intermittent series exhibits significantly less intermittence, any aggregate series would be expected to have a considerably reduced coefficient of variation compared to the original series. In other words, aggregation may smooth out time series randomness; however, extremely long aggregations may also destroy the series' information content. In addition, aggregation into larger buckets leads to aggregate time series with very few observations, which may prove inadequate for proper model training, especially in models with many parameters. This accounts for the better performance of the framework with simpler models, for example, naïve, and for the instability of Holt and damped (which require the training of two and three parameters, respectively). As a result, ADIDA should preferably be used with simpler extrapolation models.

Managerial Implications

In addition to the empirical findings of the study of Nikolopoulos et al. (2011), the function of the ADIDA framework as a self-improving process seems to hold for fast-moving demand as well. Specifically, the framework seems to significantly improve the forecasting accuracy of simple and popular methods, such as naïve and SES, through straightforward and inexpensive mathematical manipulations. Every stage of ADIDA is easily implementable on a wide range of computer programs, from high-cost sophisticated forecasting packages to simpler spreadsheet applications (http://www.fortank.com/adida). All one needs is the implementation of standard forecasting procedures, that is, classical decomposition and extrapolation methods, and other basic operations, such as an aggregation routine. Moreover, ADIDA seems to fit particularly well with unsophisticated forecasting methods, which need no or few optimizations and initializations, consequently reducing the possibility of poor model training. However, the most significant benefit of ADIDA is that it provides an inexpensive methodology that manages to produce highly accurate forecasts, nearly as accurate as those of costly commercially integrated forecasting solutions. Therefore, ADIDA offers a cost-efficient and universally implementable forecast accuracy-improving mechanism. Indeed, ADIDA with naïve and the BIC or AIC estimator produced a sMAPE of 13.97%, very close to the 13.85% of classic theta, the best-scoring method for the monthly data of the M3-Competition. Furthermore, theta itself appeared to improve even more, with its error sinking as low as 13.66%. These lowered errors may in turn correspond to better-informed decisions and more profitable supply chain management.

Conclusions and Extensions

This empirical study considered the extension of the application of the ADIDA forecasting framework, which had until now been used only with intermittent demand, to non-intermittent, fast-moving demand data. The experimental results were consistent with the previous results of Nikolopoulos et al. (2011), portraying the function of the framework as a forecasting method self-improving mechanism. This effect was particularly prominent for the simpler forecasting methods, namely naïve, SES, and theta, inducing statistically significant error improvements. On the contrary, no statistically significant improvements were found for the Holt and damped methods, which are considerably more complex. The empirical results appear promising from a managerial point of view, allowing for forecasting accuracy improvements through simple and inexpensive procedures.

Table 2. Forecasting Accuracy (sMAPE, %)

Method    M3      Replication (A = 1)   MSE      BIC      AIC
Naïve     16.91   16.89                 14.41*   13.98*   13.97*
SES       15.32   14.65                 14.51*   14.04*   14.04*
Holt      15.36   15.33                 15.31    15.26    15.23
Damped    14.59   14.46                 14.45    14.36    14.35
Theta     13.85   13.99                 13.94    13.66*   13.75*

* Statistically significant at the α = 0.05 level of significance.

Table 3. Maximum Aggregation Level

Method    MSE   BIC   AIC
Naïve     20    12    11
SES       18    15    12
Holt      2     3     3
Damped    2     7     6
Theta     18    14    12


Accuracy improvement is effectively dependent on the selected aggregation level. Estimation techniques, such as MSE, BIC, and AIC, were successfully used to acquire appropriate aggregation level values. However, an upper bound should perhaps be set for this selection so that the chosen level does not become extremely large. In the cases of naïve, SES, and theta, which presented the most important improvements, MSE resulted in increasingly accurate forecasts as more ADIDA models, up to increasingly larger aggregation levels, were considered. Although its behavior seemed quite stable, it was connected with comparatively smaller accuracy improvements. BIC and AIC, which behaved almost equivalently, induced deeper error drops; nevertheless, they may exhibit instability when larger aggregations are also allowed. This trade-off can be dealt with by setting an upper bound for the aggregation level at around 12 periods.

Further research should be conducted to investigate the use of other forecasting methods that until now have been used outside the context of ADIDA. Moreover, the experiment of this study could be replicated with different data sets so as to reach more general conclusions about the function of ADIDA as a self-improving mechanism. Empirical results indicate that there might be an optimal aggregation level per series; however, additional research is required to address this issue in depth. Sophisticated algorithms could be used for the estimation of such optimal levels, taking into consideration all quantitative, qualitative, or latent available information of the time series and possibly supplanting the more generic MSE, BIC, and AIC estimators.

References

Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716-723.

Andrawis, R. R., Atiya, A. F., & El-Shishiny, H. (2011). Combination of long term and short term forecasts, with application to tourism demand forecasting. International Journal of Forecasting, 27(3), 870-886.

Assimakopoulos, V., & Nikolopoulos, K. (2000). The theta model: A decomposition approach to forecasting. International Journal of Forecasting, 16(4), 521-530.

Babai, M. Z., Ali, M. M., & Nikolopoulos, K. (2011). Impact of temporal aggregation on stock control performance of intermittent demand estimators: Empirical analysis. Omega. doi:10.1016/j.omega.2011.09.004

Brännäs, K., Hellström, J., & Nordström, J. (2002). A new approach to modelling and forecasting monthly guest nights in hotels. International Journal of Forecasting, 18(1), 19-30.

Croston, J. D. (1972). Forecasting and stock control for intermittent demands. Operational Research Quarterly, 23(3), 289-303.

De Gooijer, J. G., & Hyndman, R. J. (2006). 25 years of time series forecasting. International Journal of Forecasting, 22, 443-473.

Gardner, E. S. (1985). Exponential smoothing: The state of the art. Journal of Forecasting, 4, 1-38.

Holt, C. C. (1957). Forecasting seasonals and trends by exponentially weighted averages. O.N.R. Memorandum 52/1957. Pittsburgh: Carnegie Institute of Technology. Reprinted with discussion in 2004, International Journal of Forecasting, 20, 5-13.

Johnston, F. R., Boylan, J. E., & Shale, E. A. (2003). An examination of the size of orders from customers, their characterization and the implications for inventory control of slow moving items. Journal of the Operational Research Society, 54, 833-837.

Makridakis, S., & Hibon, M. (2000). The M3-Competition: Results, conclusions and implications. International Journal of Forecasting, 16(4), 451-476.

Makridakis, S., Wheelwright, S. C., & Hyndman, R. J. (1998). Forecasting: Methods and applications (3rd ed.). New York: John Wiley and Sons.

Nikolopoulos, K., Syntetos, A. A., Boylan, J. E., Petropoulos, F., & Assimakopoulos, V. (2011). An aggregate-disaggregate intermittent demand approach (ADIDA) to forecasting: An empirical proposition and analysis. Journal of the Operational Research Society, 62, 544-554.

Schwarz, G. E. (1978). Estimating the dimension of a model. Annals of Statistics, 6(2), 461-464.

Silvestrini, A., & Veredas, D. (2008). Temporal aggregation of univariate and multivariate time series models: A survey. Journal of Economic Surveys, 22(3), 458-497.

Syntetos, A. A., & Boylan, J. E. (2001). On the bias of intermittent demand estimates. International Journal of Production Economics, 71, 457-466.

Willemain, T. R., Smart, C. N., & Schwarz, H. F. (1994). Forecasting intermittent demand in manufacturing: A comparative evaluation of Croston's method. International Journal of Forecasting, 10(4), 529-538.


About the authors

Georgios SPITHOURAKIS holds a Diploma in Electrical and Computer Engineering from the National Technical University of Athens (NTUA), Greece.

Since 2009, he has been a Research & Development Assistant at the Forecasting & Strategy Unit, School of Electrical and Computer Engineering, NTUA.

Fotios PETROPOULOS is a Research Associate in the School of Electrical and Computer Engineering at the National Technical University of Athens. He is the coordinator of the Forecasting & Strategy Unit directed by professor V. Assimakopoulos. His research interests are in statistics, time series forecasting, business forecasting information systems, Windows programming, and software engineering. He has published several original articles in refereed academic journals (JORS, IJEF) and presented his work at major international conferences (INFORMS, ISF, EURO, OR).

M. Zied BABAI is Associate Professor in Operations Management at BEM-Bordeaux Management School. He holds a PhD in Industrial Engineering from

the Ecole Centrale Paris where he also worked as a Teaching and Research Assistant for three years. From October 2006 to September 2008, he joined the Centre for Operational Research and Applied Statistics at the Salford Business School (UK), working on a project funded by the Engineering and Physical Sciences Research Council (EPSRC, UK). His research interests relate primarily to demand forecasting and inventory management with a special emphasis on the development of quantitative models.

Konstantinos NIKOLOPOULOS is the Director of forTANK. He is an expert in Demand Forecasting, forecasting the impact of Special Events, and

Forecasting Support Systems. He received his Engineering Doctorate from the National Technical University of Athens. He has worked at Lancaster University Management School and then at Manchester Business School, and now holds the Chair in Decision Sciences at Bangor Business School and is a Visiting Associate at the Herbert Simon Institute and the Lancaster Centre for Forecasting. His work has appeared in the International Journal of Forecasting. He is a co-originator of the Theta model and acted in 2004-2005 as Research Officer for the EPSRC research project "The Effective Design and Use of Forecasting Support Systems for Supply Chain Management".

Vassilios ASSIMAKOPOULOS is a professor of Decision Support Systems at the School of Electrical and Computer Engineering of the National Technical University of Athens (NTUA). He has studied applications of decision-making systems to modern problems of entrepreneurial planning. He specializes in the areas of strategic administration, planning and implementation of information systems for the management of IT projects, management of operational resources, and statistics and time series forecasting. He is the director of the Forecasting and Strategy Unit (FSU) of the NTUA. He served as Special Secretary for Digital Planning for the period 2004-2009.

Appendix A. Forecasting Methods

We use the following notation:

$D_t$: actual demand at time $t$

$F_t$: estimate (forecast) of demand at time $t$

$T_t$: estimate of the trend at time $t$

$S_t$: estimate of the level of the series at time $t$

$LRL_t$: linear regression on actual demand at time $t$

$\alpha, \beta, \phi$: smoothing parameters ($0 \le \alpha, \beta, \phi \le 1$)

In the following sub-sections we provide the equations for each forecasting method considered in our research work.

The Naïve Method

The estimate of the demand under the naïve method is given by

$$F_{t+1} = D_t$$

The Single Exponential Smoothing Method

The estimate of the demand under the single exponential smoothing method is given by

$$S_t = \alpha D_t + (1 - \alpha) S_{t-1}$$

$$F_{t+1} = S_t$$

The Holt Method

The estimate of the demand under the Holt method is given by

$$S_t = \alpha D_t + (1 - \alpha)(S_{t-1} + T_{t-1})$$

$$T_t = \beta (S_t - S_{t-1}) + (1 - \beta) T_{t-1}$$

$$F_{t+1} = S_t + T_t$$

The Damped Trend Exponential Smoothing Method

The estimate of the demand under the damped trend exponential smoothing method is given by

$$S_t = \alpha D_t + (1 - \alpha)(S_{t-1} + \phi T_{t-1})$$

$$T_t = \beta (S_t - S_{t-1}) + (1 - \beta) \phi T_{t-1}$$

$$F_{t+1} = S_t + \phi T_t$$

The Theta Method

The classic theta method is applied on a deseasonalised series as a three-step procedure:

1. Each time series is decomposed into two theta lines, the linear regression line (also referred to as theta line $\Theta = 0$) and the theta line $\Theta = 2$, which is calculated as follows:

$$\Theta 2_t = 2 D_t - LRL_t$$

2. The linear regression line is extrapolated in the usual way and the second line is extrapolated via single exponential smoothing.

3. The forecasts produced from the extrapolation of the two lines are combined with equal weights.
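The following is a hedged Python sketch of the three-step procedure, taking the theta line Θ = 2 as 2D_t − LRL_t and extrapolating it with a fixed SES smoothing parameter; the study optimizes that parameter on the in-sample MSE.

```python
import numpy as np

def theta_forecast(series, horizon, alpha=0.5):
    """Classic theta method sketch: average the extrapolations of theta lines 0 and 2."""
    series = np.asarray(series, dtype=float)
    t = np.arange(len(series))

    # Step 1: theta line 0 is the linear regression line on the actual demand (LRL_t);
    # theta line 2 doubles the deviations of the series around it.
    slope, intercept = np.polyfit(t, series, 1)
    lrl_in = intercept + slope * t
    lrl_out = intercept + slope * (len(series) + np.arange(horizon))
    theta2 = 2.0 * series - lrl_in

    # Step 2: extrapolate theta line 2 with single exponential smoothing (fixed alpha here).
    level = theta2[0]
    for obs in theta2[1:]:
        level = alpha * obs + (1 - alpha) * level

    # Step 3: combine the two extrapolations with equal weights.
    return 0.5 * (lrl_out + np.full(horizon, level))
```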

Appendix B. Forecasting Error Measures

The MSE and sMAPE measures are defined respectively as follows:

$$\mathrm{MSE} = \frac{1}{n} \sum_{t=1}^{n} (D_t - F_t)^2$$

$$\mathrm{sMAPE} = \frac{1}{n} \sum_{t=1}^{n} \frac{2\,|D_t - F_t|}{D_t + F_t} \times 100\%$$
