Energy consumption estimation plays an essential role in energy system management and the energy distribution network. However, implementing forecasting models requires substantial knowledge of programming languages, data storage systems, and data preparation. This can be a limitation for users working in this industry who do not have such knowledge. This work therefore proposes an application that makes it possible for the user to apply different forecasting algorithms to predict the energy consumption of buildings. The proposed application, developed in the scope of the SIMOCE project (ANI|P2020 17690), is meant to be used by users who do not have enough knowledge to implement these methods and run the models on their own. Five forecasting methods are considered in this application, namely ANN, SVM, and three FRBS methods: HyFIS, WM, and GFS.FR.MOGUL. Additionally, four different strategies to manage and prepare the data sets used to train the methods are included. In this way, users have access to several of the most relevant forecasting methods in this domain, as well as different approaches for data treatment and training. Besides facilitating the forecasting process for users with no experience in machine learning, the proposed application also mitigates the difficulty of choosing the best approach for each prediction circumstance (amount of data, available data variables), as it provides the means to reach results with the different algorithms and thus supports the user in selecting the most suitable solution for each situation. In addition to the abovementioned benefits, which are mostly directed to users in the power and energy domain, the presented application also has direct advantages in multiple other domains, e.g., as a facilitator for the implementation of energy management methods in manufacturing systems.
By helping to reach suitable forecasts of energy consumption, the proposed application may be directly integrated with switch-off energy saving methods for production lines, such as the model proposed in [24], or even integrated into advanced optimization methods for energy saving, as presented in [25]. Moreover, having been developed as a domain-agnostic application, it can be applied in multiple other domains for forecasting purposes, needing only access to a log of historical data.
Seasonal variations are regular up-and-down movements in a time series that relate to recurring events. Demand for coal and fuel oil, for example, peaks during cold winter months. Seasonality may apply to hourly, daily, weekly, monthly, or other recurring patterns. Many major aircraft assemblies/parts show similar demand surges and need to be precisely predicted and supplied on a monthly basis to optimize both aircraft operation and the aviation budget. To this end, monthly seasonality indices for TBO parts have been worked out, and the demand pattern of all these TBOs has been calculated on a monthly basis. The forecast of TBO parts for the year 2008 has been obtained through the application of the mathematical model employing the aforementioned forecasting techniques. These forecasted quantities of TBO parts are then used to determine the monthly requirements of TBO parts, on the basis of the monthly seasonality index calculated from historical monthly demand data for the period 2005-2007. This enables the buyer to raise a timely demand and the supplier to arrange on-time delivery of the part in question. Periods (months) of peak requirement can also be identified easily.
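The excerpt does not reproduce the index formula itself; a common ratio-to-overall-mean construction, consistent with the description above (2005-2007 monthly demand driving the monthly allocation of an annual forecast), can be sketched as follows. The function names and the exact normalization are illustrative assumptions, not the paper's own code:

```python
from collections import defaultdict

def monthly_seasonality_indices(demand):
    """Compute a seasonality index per month from historical monthly demand.

    `demand` maps (year, month) -> observed quantity. The index for a month
    is that month's average demand across years divided by the overall
    average monthly demand, so an index > 1 marks a peak month.
    """
    by_month = defaultdict(list)
    for (year, month), qty in demand.items():
        by_month[month].append(qty)
    monthly_avg = {m: sum(v) / len(v) for m, v in by_month.items()}
    overall_avg = sum(monthly_avg.values()) / len(monthly_avg)
    return {m: avg / overall_avg for m, avg in monthly_avg.items()}

def allocate_annual_forecast(annual_qty, indices):
    """Split an annual forecast into monthly requirements via the indices."""
    return {m: annual_qty * idx / len(indices) for m, idx in indices.items()}
```

Under this construction the indices average to 1, so the monthly allocations always sum back to the annual forecast quantity.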
Predictions for Driven 15-minute Energy Consumption. As this time series has a 15-minute resolution, the 36-hour forecast covers the next 144 time steps. In Figure 36 (a), the 36-hour predictions of the Persistence, ARIMA, and Univariate Stack LSTM models are presented for one particular sample. The ARIMA model produces constant predictions after a certain time step. The Persistence output follows the same shape as the true values but does not overlap with them. The predictions of the Univariate Stack LSTM model try to follow the spikes and dips but are not accurate enough. This behavior can be seen in Figure 36 (b) and (c), which show the first and last time-step predictions of the Univariate Stack LSTM across 500 samples. Figure (c) shows that the predictions for the last time step are far from the actual values. Figure 35 (b) shows the actual values and predictions for all 36-hour forecast points (in light color), the first time step only (in blue), and the last time step (in green) from all samples. It can be seen that the predictions for the first time step from all samples vary linearly with the actual values, but with lower confidence. Overall, the scatter plot confirms that the quality of the predictions is low, especially for the later time steps. The RMSE errors of the Univariate Stack LSTM for each of the 144 time steps are depicted in Figure 35 (c). As the time series has a pattern of spikes and falls, fluctuations appear in the RMSE errors throughout the forecast period.
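The exact persistence variant is not specified in this excerpt; a day-ahead seasonal persistence baseline, which repeats the value observed 96 steps (one day at 15-minute resolution) earlier, is a common choice and would reproduce the daily shape without tracking the current day, matching the "same behavior but no overlap" observation above. The function name and the one-day period are assumptions:

```python
import numpy as np

def persistence_forecast(history, horizon=144, period=96):
    """Seasonal persistence baseline for a 15-minute series.

    Predicts each of the next `horizon` steps (144 steps = 36 hours) by
    copying the value observed `period` steps earlier (96 steps = one day
    at 15-minute resolution). Earlier forecasts feed later ones, so the
    daily shape is repeated verbatim across the whole forecast window.
    """
    history = np.asarray(history, dtype=float)
    buf = list(history)
    out = []
    for _ in range(horizon):
        out.append(buf[-period])  # value from exactly one day earlier
        buf.append(out[-1])
    return np.array(out)
```

Because the baseline never reacts to the most recent observations, any day-to-day level shift appears as the systematic offset between prediction and truth described above.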
Hippert et al. (2001) reviewed and evaluated a collection of papers reporting the application of artificial neural networks to short-term load forecasting, a great number of which show successful implementations and useful results in the forecasting field. Ahmari Nejad et al. (2005) investigated electricity price forecasting methods in the energy market. An interesting idea in this paper was to analyze the electricity price environment in two parts: energy demand and customers' behavior in the energy market. The forecasting tool in this research was an ANN with a multi-layer perceptron architecture. The data set comprised 1036 samples, of which 836 were used to train the network and the rest for testing. The results showed good performance of the employed ANN in electricity price forecasting.
Each of the legacy applications (WarnGen, GHG, and RiverPro) is complex in its own right, and unifying them, while streamlining the process, requires deep knowledge of the complexities and underlying software architectures to retain functionality. While the study was application oriented, we believe that the methods and design recommendations can be generalized to the development of other weather forecast decision-support tools. When obtaining user feedback for the application, two things are advised. First, all stakeholders, including the designers and software developers, need to be involved to provide understanding of the unification and to guide the process around technical constraints. Second, a user-centered design should be used alongside training programs in order to promote effective and efficient
Burger & Moura (2015) forecasted electricity demand using building-level hourly consumption series. Six-hour-ahead forecasts were produced with a combination of different models (OLS with regularisation, support vector regression with a radial basis function (RBF) kernel, decision tree regression, and K-nearest neighbours). After the models were trained, the validation period was used to select the model to perform the final forecast (only one model from the set). Performance was reported to be better than with individual models. The way this model was selected is innovative: a mechanism was developed to predict the performance of each forecasting model. Their strategy illustrates how the process of model selection can be enriched. However, it is more likely to work with heterogeneous models: with pools of models of the same type, the probability of similar performance is higher and selection is likely to be less clear-cut. Their other selection mechanism was based on cross-validation, taking into account the RMSE during the validation period, and could be applied more easily to homogeneous pools of forecasting models.
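The simpler of the two selection mechanisms, picking the pool member with the lowest RMSE over the held-out validation period, can be sketched as follows. Models are treated as plain callables to stay library-agnostic; this is an illustration of the idea, not Burger & Moura's code:

```python
import numpy as np

def select_model(models, X_val, y_val):
    """Return the model from the pool with the lowest RMSE on the
    validation period. Each model is any callable mapping one input
    to one prediction."""
    y_val = np.asarray(y_val, dtype=float)

    def rmse(model):
        pred = np.asarray([model(x) for x in X_val], dtype=float)
        return float(np.sqrt(np.mean((pred - y_val) ** 2)))

    return min(models, key=rmse)
```

As the text notes, this works best when the pool is heterogeneous; for near-identical models the validation RMSEs cluster together and the winner is effectively decided by noise.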
The activity of commercialization of electricity is free but is subject to the attribution of a license by the competent administrative entity, the Directorate General for Energy and Geology (DGEG), which clarifies the list of rights and duties with a view to a transparent exercise of the activity. In the course of their business, energy supplier agents can freely purchase and sell electricity, with the right of access to the transmission and distribution networks, subject to the payment of regulated tariffs. In Portugal, consumers can, under market conditions, freely choose their energy supplier at no additional cost. Two types of energy supplier agents operate in the national electricity market: energy suppliers in the regulated market and energy suppliers in the liberalized market. Regulated market energy supplier agents, also called last resort energy suppliers (LRES), aim to ensure the supply of electricity to all consumers with low voltage installations with contracted power equal to or less than 41.4 kW (NLV); they are subject to a system of regulated tariffs and prices and are usually the only ones to offer prices subject to the tariff regime fixed by ERSE, namely transitional tariffs. In continental Portugal, last resort commercialization of electricity is ensured by EDP Serviço Universal and by a group of small distributors that act locally.
affecting energy consumption. This section concentrates on statistical methods, as our work belongs to that category. Traditional machine learning methods, such as ANNs and Support Vector Machines (SVM), have been used to forecast energy consumption. The work by Jetcheva et al. proposed an ANN model to forecast day-ahead building-level energy, with an ensemble approach to select model parameters. The use of ANNs for general load forecasting has been explored in several studies, for all three forecasting horizons: short, medium, and long. In comparison, the work by Naji et al. predicted building energy consumption by applying an Extreme Learning Machine method to data on building material thickness and thermal insulation capability. Several studies have proposed ANN and SVM models for estimating energy consumption and compared their performance. Convolutional Neural Networks (CNN) have also been used for load forecasting and were shown to outperform SVM models while achieving results comparable to ANN and other deep learning methods. The work by Mocanu et al. showed that a newly developed stochastic model, the Factored Conditional Restricted Boltzmann Machine, outperformed ANN, SVM, and classic RNNs for short-term prediction lengths. While the aforementioned works contribute to load forecasting in their respective ways, the presented work differs by focusing on S2S GRU and LSTM based models. These S2S models offer a stronger analysis of time series problems, since their internal hidden state is passed through a directed graph along a sequence. This allows S2S models to retain information in sequential data better than traditional ANNs, SVMs, and CNNs.
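The recurrence that lets these models retain sequential information can be illustrated with a minimal plain-RNN encoder in NumPy; the GRU and LSTM cells used in the S2S models replace the tanh update with gated updates but carry the hidden state forward in exactly the same way. This is an illustrative sketch, not the models evaluated in this work:

```python
import numpy as np

def rnn_encode(sequence, W_x, W_h, b):
    """Minimal recurrent encoder: the hidden state `h` is updated at
    every time step, so information from earlier inputs is carried
    forward along the sequence. The final state summarizes the whole
    input, which is what an S2S decoder would then consume."""
    h = np.zeros(W_h.shape[0])
    for x_t in sequence:
        h = np.tanh(W_x @ np.atleast_1d(x_t) + W_h @ h + b)
    return h
```

A feed-forward ANN, by contrast, has no such carried state: each prediction sees only a fixed window of inputs, so long-range temporal dependencies are lost.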
For modeling, we first have to bring data into Clementine. A Source node, named “Imported Data”, reads data from an external source (the dataset we have preprocessed) into Clementine. We used a “Partition” node to split the data into separate subsets, or samples, for the training and evaluation stages of model building. This node has been set to use 90% of the data for training and the remaining 10% for testing. The Partition node has a “random seed” option; with it, we can ensure a different sample (another subset of data records) is generated each time the node is executed. Through the “Type” node, we tell the Modeling node (the “SVM” node) whether fields are predictor fields or predicted fields. This node also describes the data type (string, integer, real, date, time, or timestamp) of a given field. The “SVM” node is a Modeling node. This sequence of operations is known as a data stream. When the stream is executed and the model is built, a model nugget is created and added to the Models palette in the upper right corner of the application window. In Clementine, to see the modeling results we have to add the model nugget to the stream and attach it to the “Type” node, at the same point as the Modeling node. An “Analysis” node helps determine whether the model is acceptably accurate.
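Outside Clementine, the Partition node's behavior, a seeded shuffle followed by a 90/10 split, can be approximated in a few lines. This is an illustrative sketch, not IBM's implementation:

```python
import random

def partition(records, train_frac=0.9, seed=None):
    """Approximate Clementine's Partition node: shuffle the records and
    split them into training and testing subsets. A different `seed`
    (the node's "random seed" option) yields a different sample, while
    reusing the same seed reproduces the same split."""
    rng = random.Random(seed)
    shuffled = records[:]          # leave the caller's list untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```

For example, `partition(list(range(100)), seed=42)` returns 90 training records and 10 test records, and calling it again with the same seed reproduces the identical split.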
In a similar project, Ahmad, Mourshed and Rezgui (2017) used two ML algorithms, ANN and Random Forest (RF), to predict energy consumption in buildings. In this work, a hotel data set was used to forecast consumption for the next hour; the algorithm with the best results was the ANN. On the other hand, Seyedzadeh, Rahimian, Glesk and Roper (2018) compared four of the most well-known ML models for predicting the consumption of electrical energy in buildings: Artificial Neural Network, Support Vector Machine (SVM), Gaussian-based Regression, and Clustering. This research concluded that all these models have their advantages and disadvantages, depending on the type of data and the input variables.
The building physics based models consider detailed information about the building and hence estimate energy consumption with the most clarity (Larsen and Nesbakken 2004). Furthermore, they do not depend upon historical values; however, historical data can be used to calibrate the models. The major advantage of building physics based models is the modular structure of their algorithms, which means that users of this approach can easily modify the algorithms to suit particular needs (Kavgic et al. 2010). Building physics based models are the only methods that can fully estimate the energy consumption of a sector without historical energy consumption information and evaluate the impact of new technologies (Swan and Ugursal 2009). Policies and initiatives such as CERT, CESP, ECO and the Green Deal require practical decisions and are directed towards the level of the physical factors which influence energy use. Bottom-up approaches, and in particular the building physics based models, help in addressing these needs and hence constitute the preferred approach in this study.
JOAN MANRESA is currently pursuing the degree in telecommunications engineering with the Polytechnic University of Catalonia and a PDD at IESE. He began his professional career in the field of mobile telephony as a Radio Planning Engineer with AMENA, from 1999 to 2003. At the end of 2003, he joined Red Eléctrica de España, where he led the deployment of projects related to energy management systems in the Balearic Islands, SCADA, 24x7 operation systems maintenance, and grid telecommunications. In 2011, he joined the Smart Grids Department, where he developed smart grid initiatives related to the deployment of the electric vehicle in Spain and its consequences for the Spanish TSO. Currently, he works in the Demand Side Management and Smart Grids Department, where he also promotes and implements strategies for demand management and smart grid development, in the service of the operation of the electrical system.
getting more and more complex. All these facts show the importance of having reliable predictive models that can be used for accurate energy consumption forecasting [2]. Numerous contributions presenting computational intelligence (CI) based approaches for ECF have appeared in recent years [3]. Surveys can be found in [2, 4]. Among the different CI methods, particular importance was given to neural networks [5, 6], particle swarm optimization [7], support vector machines [8], simulated annealing [9], and genetic algorithms [10]. One of the outcomes of the European Energy Forecast conference [11] that took place in Brussels in February 2014 was the identification of the following facts and open issues. (a) ECF will have a huge impact on the economy in the near future. (b) ECF is a very difficult problem, since it is influenced by asynchronous and often unpredictable facts. (c) Several different geographical and time scales can be identified for ECF, which contribute to making the task even more complex. (d) The currently existing CI technologies
practice” technologies for each major manufacturing process. The Benchmarking and Energy Savings Tool (BEST) is a process-based tool built on commercially available energy-efficiency technologies used anywhere in the world that are applicable to the flour industry. No actual flour facility is likely to include every single efficiency measure in the benchmark; however, the benchmark sets a reasonable standard against which plants striving to be the best can compare themselves. The energy consumption of the benchmark facility differs due to differences in processing at a given flour facility. The tool accounts for most of these variables and allows the user to adapt the model to operational variables specific to the flour facility. BEST compares a facility to international or domestic best practice using an energy intensity index (EII), which is calculated from the facility's energy intensity and the benchmark energy intensity. The EII is a measurement of the total production energy intensity of a flour facility compared to the benchmark energy intensity, as in the following equation:
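The equation itself did not survive in this excerpt. Assuming the standard ratio definition implied by the surrounding text (the facility's energy intensity compared with the benchmark energy intensity), the EII takes the form:

```latex
\mathrm{EII} = \frac{\text{facility energy intensity}}{\text{benchmark energy intensity}} \times 100
```

Under this form, an EII of 100 means the facility matches best practice, and values above 100 indicate room for efficiency improvement.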
ABSTRACT Irregular human behaviors and univariate datasets remain two main obstacles to data-driven energy consumption prediction for individual households. In this study, a hybrid deep learning model is proposed, combining an ensemble long short-term memory (LSTM) neural network with the stationary wavelet transform (SWT) technique. The SWT alleviates the volatility and increases the data dimensions, which potentially helps improve the LSTM forecasting accuracy. Moreover, the ensemble LSTM neural network further enhances the forecasting performance of the proposed method. Verification experiments were performed on a real-world household energy consumption dataset collected by the ‘UK-DALE’ project. The results show that, with a competitive training efficiency, the proposed method outperforms all compared state-of-the-art methods, including the persistence method, support vector regression (SVR), the long short-term memory (LSTM) neural network, and the convolutional neural network combined with long short-term memory (CNN-LSTM), at step sizes of 5, 10, 20 and 30 minutes, using three error metrics.
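To illustrate why the SWT "increases the data dimensions", here is a minimal one-level Haar stationary wavelet transform in NumPy. The study presumably uses a multi-level transform with a different wavelet (e.g. via a library such as PyWavelets), so this is only an illustration of the principle:

```python
import numpy as np

def haar_swt_level1(x):
    """One level of the stationary (undecimated) Haar wavelet transform.

    Unlike the decimated DWT there is no downsampling, so both the
    approximation and detail coefficients keep the original length:
    each time step gains extra input channels for the LSTM, which is
    how the SWT increases the data dimensions. Circular boundary used.
    """
    x = np.asarray(x, dtype=float)
    prev = np.roll(x, 1)
    approx = (x + prev) / np.sqrt(2.0)   # smoothed, low-frequency part
    detail = (x - prev) / np.sqrt(2.0)   # volatile, high-frequency part
    return approx, detail

def haar_iswt_level1(approx, detail):
    """Inverse of the level-1 transform above (perfect reconstruction)."""
    return (approx + detail) / np.sqrt(2.0)
```

Feeding `approx` and `detail` as separate channels lets the network see a denoised trend and the volatile residual side by side, instead of the raw univariate series alone.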
maintenance of the system over its lifetime. The performance model presented in this paper provides energy consumption and energy savings estimates from information about the current installation and equipment in the target building. In a very similar manner, the CM finds the cost of each deployment using information already available to the operator of the facility. The CM workflow is depicted in Figure 5 (left). The SEAM4US system is composed of multiple assemblies of technologies, each of which can work as a standalone solution; i.e. escalator control, ventilation control, lighting control, environmental monitoring,
A growing area of application of EE to buildings is the focus on existing homes, which dominate the current housing stock. Since the majority of existing homes are largely energy inefficient, retrofitting them to improve their EE increases the likelihood of achieving the benefits of EE, including the potential for tremendous economic, health, social, and environmental gains (Syal et al., 2014; USEPA 2010). Despite the fairly well established benefits and opportunities of energy retrofitting of existing homes, its adoption and wide-scale application have faced obstacles. One estimate puts market penetration for home energy retrofit (HER) programs at less than 2% (Neme et al., 2012). The lack of information for homeowners in an easily understood and usable format was identified as a barrier to energy retrofit adoption, resulting in low uptake (Syal et al., 2014). The research team embarked on a series of steps to investigate and solve the barrier issue. Previous works by the researchers that address different aspects of the identified problem include:
Research in the field of DSS involving forecasting shows that artificial intelligence-based methods may offer a promising approach, producing better forecasts than traditional methods. This is indicated in the Decision Support System for Pricing and Optimal Timing of Discounts on Retail Products subject to deterioration (Yulherniwati, 2007). That DSS uses an Artificial Neural Network (ANN) to forecast demand and deterioration of the products, which is then used to take optimal discount decisions. The study includes a comparison with forecasting using the multiple linear regression method, which represents the traditional causal forecasting approach, and it was found that ANN forecasting is more accurate than linear regression. Similarly, the study conducted by Zhang, Fu, et al. [Fu, 2010] presents a decision support system capable of predicting weekly tomato yields in greenhouses. The development of this system involves a set of techniques based on Artificial Intelligence, namely Neural Networks, Genetic Algorithms (GA), and Grey System Theory (GST). ANN forecasting is performed using an optimized set of input variables, selected from all available environmental parameters and measurable results. Input reduction and optimization is done using either GA or GST, and the two are compared in terms of ANN performance. Another study was conducted by Shorouq Fathi Eletter and Saad Ghaleb Yaseen, namely the Application of Artificial Neural Networks for Credit Decisions in the Jordanian Commercial Banking System (Eletter & Yaseen, 2010).
A DW is essentially different from traditional database systems. A database is a generic platform, built on a rigorous mathematical model, for managing enterprise data, performing transactions, and completing related business operations. In contrast, the DW does not rest on a strict data theory and is more engineering-oriented. It is built up by the enterprise over a long period of time and cannot simply be bought. Its users are managers at different levels, while its data come from a variety of sources. It requires a large amount of historical and summary data; because data in the warehouse are rarely modified or deleted, it is used mainly for large-scale query and analysis. The DW mainly holds data for decision analysis. Because of the particular nature of decision analysis data, a typical DW has the following characteristics:
Figure 6 shows Brent crude oil prices predicted by the IDSS developed in this research. The prices are predicted for the next one year into the future, up to July 2014. A close observation of the plot indicates that the prices will continue to fluctuate, moving up and down as seen in the past (refer to Figure 5), signifying uncertain behavior, although this kind of behavior is expected in the crude oil market. The future prices indicate that our proposed model was able to generalize well and detect patterns that the oil market might follow over the next year. Several countries' budgets, such as those of Saudi Arabia, Kuwait, Venezuela, Nigeria, Iran, Iraq, and Russia, depend heavily on expected revenue from the sale of crude oil. Accurate prediction of future prices is critical to their national planning, policymaking, and development. Even non-oil-producing countries require knowledge of future crude oil prices for strategic and industrial purposes, which might drive their economic development and in turn improve economic standards. The predicted prices suggest that our model has the potential to be deployed by these countries as a complementary tool for supporting their decision-making processes. Intergovernmental organizations such as the Organization of the Petroleum Exporting Countries (OPEC) and the Organisation for Economic Co-operation and Development (OECD) can use our projected oil prices for making decisions on oil production, consumption, supply, refinery stocks, etc., or for modifying their existing policies for the coming year.