cast models computed in near real time and thus rely only on the status of the landslide as defined by the measurements currently available, requiring neither a calibration period nor back analyses. It is worth mentioning that our method has been developed to achieve reliable short-term failure forecasts and is not intended for medium- and long-term predictions of the ToF. Rather, we aim to provide a supporting toolbox to manage EWSs in critical situations, especially when predefined early warning thresholds are exceeded. EWS managers can benefit from the additional information provided by the FFM: when the reliability of the forecast is high and a landslide failure is thus more likely, authorities can be informed in advance (in an automatic and/or semi-automatic manner) and thus have time to take countermeasures. The final interpretation of landslide failure potential must be provided by experienced users who have a deep knowledge of landslide phenomena, have access to additional data on landslide status, and are conscious of the limitations of the FFM. Thus, the FFM information is best interpreted by carefully taking into account additional evidence from other data sources, depending on the specific context. Further investigation of the reliability and accuracy of the method presented herein will be performed, mainly by considering different data sources as well as performing tests on a larger number of case studies.
This study was part of the AMPHORE (Application des Methodologies de Previsions Hydrometeorologiques Orientees aux Risques Environnementaux) European Union research project, whose objective concerns the forecasting and prevention of natural hazards, with particular reference to risks arising from severe hydro-meteorological events. The main goal was the optimisation of the existing warning systems in the Western Mediterranean Basin (Medocc area) through the implementation and use of probabilistic and statistical methods in which different scenarios are embedded together to improve the Quantitative Precipitation Forecast (QPF).
applied to the region of Flanders (Schelfaut et al., 2011; CIW, 2011). In this work, the different steps are analysed under the FREEMAN project (Flood Resilience Enhancement and Management). The European Flood Awareness System (EFAS) is another example of an EWS developed under the sponsorship of the European Commission. This system provides daily streamflow forecasts for Europe based on weather forecasts of up to 10 d lead time (medium-term forecasts). More details of this model can be found in Thielen et al. (2009), Pappenberger et al. (2011), Cloke et al. (2013) and Alfieri et al. (2014). Using this model, Dottori et al. (2017) developed a methodology to adapt EFAS to real-time forecasting. Demeritt et al. (2013) analyse the problems arising from the use of early warning systems for medium- and long-term flood forecasts, mainly the dissemination of information to people potentially affected by these events. They reveal that flood forecasters usually wait for confirmation from local institutions (hydrographic confederations) instead of acting on the information provided by the early warning systems. These local systems are focused on short-term forecasts (0 to 48 h), which are more suitable for evacuation than for damage mitigation. Some examples of these short-term local systems focused on river floods are the River Forecast Centers (https://water.weather.gov/ahps/rfc/rfc.php, last access: September 2019) in the United States of America or the "Sistema de Ayuda a la Decisión" (http://www.chebro.es/contenido.visualizar.do?idContenido=12789&idMenu=2902, last access: November 2019) developed by the Hydrographic Confederation of the Ebro River (Spain). In Europe, Meteoalarm (http://www.meteoalarm.eu/?lang=en_UK, last access: September 2019) provides advice on exceptional weather events, including floods, with a temporal window of 48 h. There are mainly two kinds of floods derived from precipitation events: flash floods and river floods.
On the one hand, flash floods are characterised by a short time delay, from 3 to 6 h, between the precipitation peak and the flood peak. These floods are usually registered in dry climates and rocky terrain, where the lack of vegetation prevents precipitation from infiltrating into the ground. These kinds of floods carry a very high level of risk because of their velocity of propagation. On the other hand, river floods are generally registered in larger rivers in areas with a wet climate, and the delay time is greater than 6 h. The consequences associated with river floods can also be dramatic for people and their property. This makes it necessary to develop an EWS to improve the security of the areas exposed to these events. The area of study analysed in this work is mainly affected by river floods.
To evaluate the classification tool for the whole event, the weighting procedure is applied to every EPS runoff forecast, using the 25% and 75% quantiles as boundaries of the uncertainty range. For an appropriate comparison with Figure 13, every forecast is again superposed for the restricted uncertainty range, and the outermost Q25/Q75 value for every simulated time step is chosen as the boundary of the dark grey area in Figure 16. With the proposed method, the peak on 2 June 2013 does not occur and several other peaks are damped, while the shape of the uncertainty band remains similar. This means that the required level of uncertainty is still taken into account, while the peak and the resulting overestimated probability of flood occurrence are eliminated.
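The envelope construction described above can be sketched in Python. The function below is a minimal illustration under assumed array shapes (it is not the cited implementation): per forecast it takes the 25%/75% quantiles over the ensemble members, then keeps the outermost values across all superposed forecasts for each time step.

```python
import numpy as np

def quantile_band(forecasts, lo=0.25, hi=0.75):
    """Outermost Q25/Q75 envelope over several superposed EPS forecasts.

    forecasts: list of 2-D arrays, each shaped (n_members, n_timesteps),
               all covering the same simulated time steps.
    Returns (lower, upper) arrays of length n_timesteps.
    """
    q_lo = np.array([np.quantile(f, lo, axis=0) for f in forecasts])
    q_hi = np.array([np.quantile(f, hi, axis=0) for f in forecasts])
    # outermost value across all superposed forecasts per time step
    return q_lo.min(axis=0), q_hi.max(axis=0)
```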
the requirements (for example, a flood prevention department needing to display the rain and other meteorological information of many towns in the same administrative region). Moreover, the interface expansibility is poor: as shown by the product below, it cannot add visibility, humidity, maximum temperature or accumulated rainfall. Besides, only the weather forecast and warning information of one industry can be displayed, so browsing the meteorological information as a whole is not intuitive.
The Brier skill score (BSS) is an ideal measure to compare the performance of probabilistic and deterministic forecasts (Wilks, 2006). The BSS is based on the Brier score (BS), which describes the quality of the forecast system in predicting the probability of exceeding a predefined threshold by measuring the squared probability error. A perfect forecast system would have a BS of zero. In order to compare the different forecast systems with each other, we made use of the BSS, which sets the skill of the different forecasts in relation to a reference forecast. A perfect forecast has a BSS of 1, whereas forecasts worse than the reference forecast have a skill below 0. In our study, the reference forecast was the probability of exceedance of the predefined thresholds based on the sample climatology. The sample incorporated all discharge observations from hours covered by one or more NORA forecasts. This resulted in a sample size of 1788 h. The thresholds analysed in our study correspond to the 0.5, 0.6, 0.7, 0.8, 0.9 and 0.95 quantiles of the sample climatology, which we refer to as q50, q60, q70, q80, q90 and q95. As the sample is restricted to the hours covered by NORA, the actual values of the threshold quantiles are higher than those used in our previous study (Liechti et al., 2013). To estimate the uncertainty of the BSS values, we applied the bootstrapping method (Efron, 1992). Thus 500 random samples of forecast-observation pairs were drawn with replacement from the 1389 h belonging to each lead time, leading to the confidence limits (95 %) shown in Fig. 5.
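The BS/BSS definitions and the bootstrap over forecast-observation pairs can be sketched as follows. This is a generic illustration, not the authors' code; the function names and sample sizes are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def brier_score(p_fcst, obs_event):
    """Mean squared probability error; 0 is a perfect forecast.

    p_fcst: forecast probabilities of exceeding the threshold.
    obs_event: 1 if the threshold was exceeded, else 0.
    """
    return np.mean((np.asarray(p_fcst) - np.asarray(obs_event)) ** 2)

def brier_skill_score(p_fcst, obs_event, p_ref):
    """BSS = 1 - BS/BS_ref; 1 is perfect, below 0 is worse than the reference.

    p_ref: reference forecast, e.g. the climatological exceedance probability.
    """
    return 1.0 - brier_score(p_fcst, obs_event) / brier_score(p_ref, obs_event)

def bootstrap_bss(p_fcst, obs_event, p_ref, n=500, ci=0.95):
    """Resample forecast-observation pairs with replacement for CI limits."""
    p_fcst, obs_event = np.asarray(p_fcst), np.asarray(obs_event)
    scores = []
    for _ in range(n):
        idx = rng.integers(0, len(obs_event), len(obs_event))
        scores.append(brier_skill_score(p_fcst[idx], obs_event[idx], p_ref))
    tail = (1 - ci) / 2
    return np.quantile(scores, [tail, 1 - tail])
```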
Therefore, a statistical approach to forecasting rip currents has been adopted by NOAA's coastal NWS WFOs. Several of the factors that have been suggested to influence rip current formation elsewhere are represented in a forecasting worksheet (see Appendix A). The NWS Morehead City (MHX) forecast sheet is essentially based on the original East Central Florida Lushine Rip Current Scale (ECFL LURCS), with some adjustments made for coastline orientation that were adopted by adjacent NWS WFOs (Lascody 1998). The sheet accounts for wave height, wave period, wave direction, wind speed, wind direction, and tidal amplitude. These parameters are assigned a numeric weighting based on certain ranges of values; the weightings are then totaled to give the rip current risk for the day. Values greater than 5.5 denote a high risk of rip currents, implying that wave and/or wind conditions are favorable for the formation of dangerous rip currents. Values between 4 and 5.5 signify a moderate rip current risk, implying that wave and/or wind conditions support the formation of strong and frequent rip currents. A low risk of rip currents is triggered for values below 4 and suggests that wave and/or wind conditions do not support the formation of strong rip currents, with a qualifying statement warning that rip currents may still occur, especially in the vicinity of hard surf zone structures (NWS MHX 2007).
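The final thresholding step of the worksheet can be expressed directly in code. The function below encodes only the category boundaries quoted above; the computation of the component weightings (wave, wind, tide) is assumed to happen upstream.

```python
def rip_current_risk(total_score):
    """Map an ECFL LURCS-style worksheet total to a risk category.

    total_score: sum of the numeric weightings for wave height, wave
    period, wave direction, wind speed, wind direction and tidal
    amplitude, as totaled on the forecast sheet.
    """
    if total_score > 5.5:
        return "high"      # favorable for dangerous rip currents
    if total_score >= 4:
        return "moderate"  # strong and frequent rip currents supported
    return "low"           # rips may still occur near surf zone structures
```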
FPERS powered by GEE provided an innovative platform that enabled us to integrate a huge amount of geospatial data, resolving the data accessibility issue. From a practical point of view, however, manipulating more than five layers simultaneously is not very helpful for decision making. Instead, referring to the right data at the right time is much more important, particularly for timely responses to an urgent event. Determining the right data and the right time requires domain knowledge for different types of disasters. Therefore, in this work, a set of SOPs was proposed for the pre-, during-, and post-flood stages to ensure that all important data were properly displayed and reviewed as time progressed. At the pre-flood stage, the huge amount of geospatial data was categorized as typhoon forecast and archive, disaster prevention and warning, disaster events and analysis, and basic data and layers. At the during-flood stage, three strategies were implemented to facilitate access to real-time data, present key information, and make sound recommendations to support decision-making. At the post-flood stage, various types of remote sensing imagery were integrated, including Formosat-2 optical imagery to detect and monitor barrier lakes, synthetic aperture radar imagery to derive an inundation map, and high-spatial-resolution photographs taken by unmanned aerial vehicles to evaluate the damage to river channels and structures. The prevention and urgent response experiences gained from FPERS can be applied to other types of urgent events, such as sediment-related or earthquake-related disasters.
This unit will issue early and real-time warnings to stakeholders, including each river flow station. These will consist of joint or individual directives to the agencies related to the river flow forecast system and its impacts, such as the Water and Power Development Authority (WAPDA), the Pakistan Meteorological Department (PMD), the National Flood Forecasting Bureau (NFFB), the Federal Flood Commission (FFC), the Indus River System Authority (IRSA), the National Disaster Management Commission (NDMC), etc. Moreover, some work also relates to the EWS itself, such as hazard detection, flood hazard risk assessment, and vulnerability analysis of the more sensitive areas. An EWS requires deep insight into the formulation and dissemination of warning messages and the community response.
Disasters, whether natural or man-made, usually tend to cause one or more secondary disasters, a phenomenon referred to as the incident chain. For example, rainstorms may cause floods, and floods could lead to landslides, debris flows, building collapses and other secondary disasters. In order to predict both primary and secondary disasters at the same time, an incident chain model for rainstorms is developed. According to the incident chain model for rainstorms, 3D GIS data of specific areas and meteorological forecast information, the occurrence probability, development trend, affected area and duration of the flood caused by the rainstorm can be predicted. The flood prediction results are then used as the initial conditions for the calculation of other secondary disasters such as landslides, debris flows and building collapses; i.e. the prediction results of the secondary disasters in one stage are taken as the initial conditions of the secondary disasters in the next stage. In this way, iterative calculation is carried out to predict all possible secondary disasters and their effects. Disaster impact prediction based on the incident chain model is shown in Figure 1.
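The iterative calculation described above can be sketched as a simple traversal in which each stage's predicted results seed the next stage. The predictor functions and event fields below are illustrative assumptions, not part of the referenced incident chain model.

```python
def predict_chain(primary_event, predictors):
    """Iteratively predict all secondary disasters in an incident chain.

    predictors maps a disaster type to a function that, given a predicted
    event, returns the list of secondary disasters it may trigger.
    """
    predicted, frontier = [], [primary_event]
    while frontier:
        event = frontier.pop(0)
        predicted.append(event)
        # results of this stage become initial conditions of the next stage
        frontier.extend(predictors.get(event["type"], lambda e: [])(event))
    return predicted

# toy chain: rainstorm -> flood -> landslide / building collapse
predictors = {
    "rainstorm": lambda e: [{"type": "flood", "area": e["area"]}],
    "flood": lambda e: [{"type": "landslide", "area": e["area"]},
                        {"type": "building_collapse", "area": e["area"]}],
}
chain = predict_chain({"type": "rainstorm", "area": "basin A"}, predictors)
# chain lists the rainstorm followed by its predicted secondary disasters
```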
Irrespective of the combination weighting scheme, the other line of research in forecast combination has looked at the combination operator. Equation (1), when no constant is incorporated, is a weighted average. Agnew (1985) found that the median outperformed the mean, and recommended its use. Barrow and Kourentzes (2016) found the median performing best amongst a large variety of alternative combination schemes, as it was robust against outlying forecasts. Alternatively, one can employ the trimmed mean (Elliott and Timmermann, 2016). On the other hand, Stock and Watson (2004) found support for the mean, while McNees (1992) found no significant differences between the two. Kourentzes et al. (2014a) compared the use of the mean, median and mode of forecasts, as estimated using kernel density, and found that the mean required a substantial number of forecasts to converge to stable, good forecasting performance, while the median converged very fast. When an adequate number of forecasts was available for the kernel density estimator (around 30), the mode performed best. However, weighting schemes have not been explored for such combination operators, although such extensions are simple.
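The combination operators discussed here are easy to state in code. The sketch below is a generic illustration: the kernel-density mode follows the idea in Kourentzes et al. (2014a), but the grid resolution and the default bandwidth of `gaussian_kde` are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def combine(forecasts, operator="median", trim=0.1):
    """Combine a set of point forecasts for one period.

    operators: mean, median, trimmed mean, and mode estimated from a
    Gaussian kernel density over the forecasts.
    """
    f = np.sort(np.asarray(forecasts, dtype=float))
    if operator == "mean":
        return float(f.mean())
    if operator == "median":
        return float(np.median(f))
    if operator == "trimmed":
        k = int(len(f) * trim)          # drop k forecasts from each tail
        return float(f[k:len(f) - k].mean())
    if operator == "mode":
        kde = gaussian_kde(f)           # kernel density over the forecasts
        grid = np.linspace(f.min(), f.max(), 501)
        return float(grid[np.argmax(kde(grid))])
    raise ValueError(operator)
```

Note how the median and trimmed mean ignore the outlying forecast of 100 in `[1, 2, 3, 100]`, while the mean is dragged towards it; this is the robustness property reported by Agnew (1985) and Barrow and Kourentzes (2016).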
The results in Figure 2 show that, for the NCEP ensemble forecast products, the predicted jump index increases on average with lead time. This is consistent with forecasting practice: longer-range forecasts are more prone to forecast jumps than short-range ones. Figure 2 also shows that the time-averaged jump index of the ensemble-mean forecast is usually smaller than that of the corresponding control forecast. This becomes more obvious at longer lead times (more than 144 h); when the lead time reaches 240 h or more, the time-averaged jump index of the ensemble-mean forecast is only 25% - 50% of that of the corresponding control forecast. At relatively short lead times, however, the difference between the jump indices of the ensemble-mean and control forecasts is small. These results preliminarily show that, leaving aside a few special cases, forecast jumps at long lead times can be reduced by ensemble averaging: as the lead time extends, the jump index of the ensemble mean grows relatively slowly. In summary, the NCEP ensemble-mean forecast has better forecast consistency than its corresponding control forecast.
When the forecast bias is set to fb = −0.16, reliability improves when the skill decreases, i.e. the reliability term of the Brier score is numerically reduced. This improvement is larger when the parameters fs and fb are comparable, i.e. when forecast bias and forecast error have the same amplitude, and it is emphasized when the forecast bias increases (not shown). This rather unexpected feature can be explained by the fact that increasing the forecast skill has the primary effect of decreasing the ensemble spread, provided the relationship between spread and skill exists. When the forecast skill is high, the spread tends to be small and the ensemble distribution is sharp. The shift of the ensemble distribution with respect to the verification, attributable to the forecast bias, leads to a strong proportion of outliers. This induces a systematic underestimation of the forecast probability of an infrequent event (such as the event considered in the present study). When the forecast skill is lower, the spread tends to be large and the ensemble distribution is flatter. The proportion of outliers due to the forecast bias (i.e. related to the shift of the distribution) is thus reduced, so that the underestimation of the forecast probability is limited.
In this paper, a novel system is proposed to detect DDoS attacks in a big data environment based on the Spark framework; it includes three main algorithms. The first, based on information entropy, can effectively give advance warning of various kinds of DDoS attacks according to changes in the information entropy of the data stream's source and destination IP addresses. The second, a specially designed dynamic-sampling K-means algorithm, effectively improves the attack detection accuracy. The third, a parallelized version of the dynamic-sampling K-means algorithm, can quickly and effectively detect a variety of DDoS attacks in a big data environment. The experimental results show that good warning results are obtained and that the detection accuracy and speed are clearly superior to those of traditional DDoS attack detection methods.
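The entropy-based early warning idea can be illustrated with a short sketch. The window size and alarm threshold below are assumptions for illustration, not values from the paper.

```python
import math
from collections import Counter

def entropy(addresses):
    """Shannon entropy (bits) of a window of packet IP addresses."""
    counts = Counter(addresses)
    n = len(addresses)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A flood of packets converging on one target collapses the destination-IP
# entropy, while randomly spoofed sources inflate source-IP entropy; alarming
# on such shifts from a learned baseline gives the advance warning described
# above. Threshold and baseline here are illustrative assumptions.
def ddos_warning(window, baseline, threshold=1.0):
    return abs(entropy(window) - baseline) > threshold
```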
Simultaneously, when working on the perception of warning signals, it appeared obvious and necessary to take into account that the process of perceiving hazard signals takes place in a specific environment. It is based on issues derived from the historical development and the technological and social structures of storage and transfer of the signs that make up the media; here the creators of modern media theory were analysed, including Herbert Marshall McLuhan, Derrick de Kerckhove, Paul Virilio, Harold A. Innis, Friedrich A. Kittler, Erving Goffman and John Urry. Other authors in
Abstract. Hindcasts based on the extended streamflow prediction (ESP) approach are carried out in a typical rainfall-dominated basin in China, aiming to examine the roles of initial conditions (IC), future atmospheric forcing (FC) and hydrologic model uncertainty (MU) in streamflow forecast skill. The combined effects of IC and FC are explored within the framework of a forecast window. By implementing virtual numerical simulations without the consideration of MU, it is found that the dominance of IC can last up to 90 days in the dry season, while its impact gives way to FC for lead times exceeding 30 days in the wet season. The combined effects of IC and FC on the forecast skill are further investigated by proposing a dimensionless parameter (β) that represents the ratio of the total amount of initial water storage to the incoming rainfall. The forecast skill increases exponentially with β, and varies greatly in different forecast windows. Moreover, the influence of MU on forecast skill is examined by focusing on the uncertainty of model parameters. Two different hydrologic model calibration strategies are carried out. The results indicate that the uncertainty of model parameters exhibits a more significant influence on the forecast skill in the dry season than in the wet season. The ESP approach is more skillful in monthly streamflow forecasts during the transition period from wet to dry than otherwise. For the transition period from dry to wet, the low skill of the forecasts could be attributed to the combined effects of IC and FC, but less to biases in the hydrologic model parameters. For forecasts in the dry season, the skill of the ESP approach is heavily dependent on the strategy of model calibration.
those variables also follows a multivariate normal distribution. Thus the many instances of missing data in streamflow records, as described in Sect. 2.2, are easily handled. Several of the forecast locations experience ephemeral and intermittent flows, which can result in a probability mass for zero flows. This problem is handled in the BJP modelling approach by treating zero flows as censored data, meaning that the observations of zero flow are treated as being of unknown precise value equal to or below zero. Uncertainty in the model parameters due to the short data records is handled by inferring parameters using Markov chain Monte Carlo (MCMC) methods. Probabilistic (ensemble) forecasts are produced using conditional multivariate normal distribution equations. When predictor-predictand relationships are weak, the BJP modelling approach is designed to produce reliable forecasts that approximate climatology (i.e. the frequency distribution of historically observed streamflow).
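The conditional-normal forecasting step and the censoring of zero flows can be sketched as follows. This is a simplified bivariate illustration with fixed parameters: the actual BJP approach works with transformed variables, infers parameters by MCMC, and handles censoring within the inference rather than by clipping samples.

```python
import numpy as np

def conditional_normal(mu, cov, x_pred):
    """Conditional distribution of predictand y given predictor x,
    for a bivariate normal [x, y] ~ N(mu, cov)."""
    mu_x, mu_y = np.asarray(mu)
    var_x, var_y, cov_xy = cov[0, 0], cov[1, 1], cov[0, 1]
    m = mu_y + cov_xy / var_x * (x_pred - mu_x)   # conditional mean
    v = var_y - cov_xy ** 2 / var_x               # conditional variance
    return m, v

def ensemble_forecast(mu, cov, x_pred, n=1000, rng=None):
    """Sample an ensemble from the conditional distribution, then map
    negative draws to zero, mirroring the idea that zero flows are
    values of unknown magnitude at or below zero."""
    if rng is None:
        rng = np.random.default_rng(0)
    m, v = conditional_normal(mu, np.asarray(cov), x_pred)
    draws = rng.normal(m, np.sqrt(v), n)
    return np.clip(draws, 0.0, None)
```

Note that a weak predictor-predictand correlation (small `cov_xy`) pushes the conditional mean and variance back towards the marginal ones, which is exactly the "forecasts approximate climatology" behaviour described above.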
Looking specifically at the results of the temporal and spatial content analyses, it becomes apparent that many differences lie between the two subsets of data in each analysis. For the temporal analysis, there are discrepancies between tweets within the warning time and those outside it, which may be due in part to the two time frames posing different levels of inherent risk to Twitter users. Within the warning time there is more immediate danger, and safety information, such as location and time, is what people are most concerned with. Outside the warning time, users potentially have more time to reflect and tweet about their experience, which is reflected in the fact that tweets in this time frame tend to be more personal. When comparing inside the warning polygon with outside it, similar trends emerge: the wording inside the warning polygon is more specific about the threat, location and time, whereas tweets outside the warning polygon typically offer sheltering advice and personal opinions about the storm.
Combining forecasts using regression techniques was suggested by Crane and Crotty. Granger and Ramanathan pointed out that the conventional forecast combination methods can be viewed within a regression framework. Meanwhile, Wilson and Keating suggested that the equal-weight method can be referred to as a simple averaging combination method or unweighted mean. This method yields, as its result, the average of the individual forecasts involved. The general formula for the combination forecast method is shown below.
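The formula itself is not reproduced in this excerpt. A standard form consistent with the description above (a weighted average of the individual forecasts, with equal weights as the special case) would be:

```latex
\hat{y}^{\,c}_{t} = \sum_{i=1}^{n} w_i \, \hat{y}^{(i)}_{t},
\qquad \sum_{i=1}^{n} w_i = 1,
\qquad w_i = \tfrac{1}{n} \ \text{(equal weights)}
```

where \(\hat{y}^{(i)}_{t}\) is the forecast of method \(i\) for period \(t\) and \(\hat{y}^{\,c}_{t}\) is the combined forecast; this notation is an assumption, since the original equation is not shown here.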
All the models we explored in our research are available in APO. However, the parameters for most models must be filled in manually and cannot be optimized by the system. If we want to use all the models, we must optimize the parameters in the advanced forecast tool in Microsoft Excel, which was built for this research project. This would take a lot of effort on the part of FrieslandCampina. Therefore, we propose the following: APO has the functionality to optimize the parameters for the simple exponential smoothing model and the double exponential smoothing model, and we advise using one of these models for every article. This will, however, cause a decrease in forecast accuracy. When we compare, per article, the accuracy of the optimal model with the accuracy of the better of the simple and double exponential smoothing models (results in Appendix N), we see that, on average, forecast accuracy decreases by 1.4% when the better of simple or double exponential smoothing is used. 77% of all products show a decrease of less than 2% in accuracy performance when we use the better of the simple or double exponential smoothing models. Analyzing the difference in bias between the models, Appendix O shows that, when the same simple or double exponential smoothing model from Appendix N is used, there is a 3.27% decline in bias performance. 73% of all products show a decline of less than 2% in bias performance when the better of the simple or double exponential smoothing models is used.
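The parameter optimisation discussed here, tuning the smoothing constant of simple exponential smoothing by one-step-ahead error, can be sketched as follows. The grid search below stands in for whatever optimiser APO or the Excel tool actually uses; it is an illustration, not their implementation.

```python
def ses_forecasts(y, alpha):
    """One-step-ahead simple exponential smoothing forecasts for series y."""
    f, level = [], y[0]
    for obs in y:
        f.append(level)                         # forecast before seeing obs
        level = alpha * obs + (1 - alpha) * level
    return f

def optimise_alpha(y, step=0.01):
    """Grid-search alpha by one-step-ahead squared error, standing in for
    the parameter optimisation APO performs for the SES model."""
    return min((a * step for a in range(1, 100)),
               key=lambda a: sum((o - f) ** 2
                                 for o, f in zip(y, ses_forecasts(y, a))))
```

On a steadily trending article a high alpha is selected, because simple exponential smoothing must react quickly to keep up with the trend; this lag is precisely what the double (Holt) model removes by adding a trend term.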