Rare event simulation

A review of conditional rare event simulation for tail probabilities of heavy tailed random variables

From the last expression it follows that this estimator has variance 0. At first sight, this observation might appear of no practical use, as the implementation of the zero-variance estimator is unfeasible since it requires knowledge of the unknown probability of interest P(A). However, the zero-variance distribution is of great theoretical interest: one can obtain partial information about it, and it serves as the ideal model when choosing an appropriate distribution, that is, a distribution which is as “close” as possible to the zero-variance distribution. Intuitively, we would like to choose a distribution in such a way that the “important” event A is sampled with higher frequency than under the original distribution. However, there is a natural trade-off in the final value of the variance of the new estimator, because increasing the frequency of any subset also increases the values of its likelihood ratios. Therefore, the selection of the importance sampling distribution requires careful analysis. In fact, a considerable amount of research effort in rare event simulation has been devoted to approximating the zero-variance distribution. One of the most prominent cases is the Cross-Entropy method, an iterative method which selects an “optimal” distribution from a parametric family by minimizing the Kullback-Leibler distance to the zero-variance distribution. Another prominent case is Exponential Change of Measure, or Exponential Twisting, where the importance sampling distribution is selected from the so-called exponential family generated by the original distribution. The latter technique will be discussed in some detail in the following section. The second variance reduction technique that will be discussed here is Conditional Monte Carlo. This is perhaps the most general variance reduction technique and the one requiring the most theoretical effort. The intuitive idea behind it is that the variance of a given estimator can be reduced by extracting the variability coming from known information. If we add a little more rigor to this idea we simply end up with conditional expectation. Let us consider again a random variable W defined on a probability space (Ω, F, P), h an arbitrary function and G a simulatable sub-σ-algebra of F. Then
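(The displayed identity is cut off in the excerpt; presumably it is the standard conditional Monte Carlo relation, reproduced here as a sketch.)

$$
\mathbb{E}[h(W)] \;=\; \mathbb{E}\big[\,\mathbb{E}[h(W)\mid \mathcal{G}]\,\big],
\qquad
\operatorname{Var}\big(\mathbb{E}[h(W)\mid \mathcal{G}]\big) \;\le\; \operatorname{Var}\big(h(W)\big).
$$

Thus replacing the raw estimator h(W) by the simulatable conditional expectation E[h(W) | G] leaves the mean unchanged and can only reduce the variance, by the tower property and the conditional variance decomposition.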

Efficient rare event simulation using DPR for multidimensional parameter spaces

Splitting can be viewed as an alternative approach to rare event simulation and can potentially overcome some of the difficulties associated with IS. Splitting has been considered in early works [3] and has recently been used in the context of queueing networks [4], [5], [6], [7], [8], [9], [10], [11]. The main notion behind splitting is to partition the system states into subsets. When a subset is entered during system simulation, numerous retrials are initiated with the current subset as the starting or entry state of the system. Essentially, the main system trajectory is split into a number of subtrajectories. The number of subtrajectories spawned after visiting each subset is determined by the splitting or oversampling factors. The splitting method involves the selection of the subsets and oversampling factors, where the subsets are indexed such that the higher subsets correspond to the rare events. Splitting increases the relative frequency of the rare events by placing an emphasis on higher subsets. Efficient splitting can be achieved by equalizing the steady-state probabilities of the subsets under splitting [5], [6], [9], [10]. A method of splitting called RESTART [4], [5] assumes that the subsets are nested and that the oversampling factors are integers, and allows for only single subset transitions in a trajectory. A more general technique that we apply here, called direct probability redistribution (DPR) [10], requires the subsets to be disjoint (but not necessarily nested) and accounts for multiple subset transitions. The DPR technique can be used to estimate steady-state probabilities in finite-state Markovian or semi-Markovian systems, and has been successfully applied to performance estimation in mobile wireless systems [11] and in ATM systems [10], including ATM systems with internal flow control and systems with multiple hops, for which traditional IS techniques are difficult to implement. Existing splitting techniques typically use one system parameter to control the splitting (e.g., the queue length). Thus, the state space is partitioned into subsets according to a single splitting parameter that is often a directly observable system parameter. However, there are systems for which efficient splitting cannot be achieved without using multiple system parameters when constructing splitting subsets. These systems can be characterized in two ways: the rare events are related to multiple system parameters, or the typical trajectories the system takes to the rare events are defined through multiple system parameters. The time-reversal two-node Jackson network with a two-dimensional system space, for which single-parameter splitting failed to yield statistical estimates in [8], falls into the first category. We consider two illustrative example systems, which fall into either of the above categories.
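As an illustration of the splitting idea just described (not of the DPR algorithm of [10] itself), the following minimal sketch estimates a level-crossing probability for a birth-death walk by fixed-effort multilevel splitting; the levels, drift and per-stage budget are arbitrary choices for the example.

```python
import random

def reaches_upper(x, lower, upper, p_up):
    """Run a nearest-neighbour birth-death walk from x until it hits
    `upper` (success) or `lower` (failure)."""
    while lower < x < upper:
        x += 1 if random.random() < p_up else -1
    return x >= upper

def splitting_estimate(levels, p_up=0.3, n_per_stage=10_000, seed=1):
    """Fixed-effort multilevel splitting estimate of
    P(hit levels[-1] before 0 | start at levels[0]).
    Each stage estimates one conditional level-crossing probability;
    the product of the stage estimates is the rare-event probability."""
    random.seed(seed)
    estimate = 1.0
    for entry, target in zip(levels[:-1], levels[1:]):
        hits = sum(reaches_upper(entry, 0, target, p_up)
                   for _ in range(n_per_stage))
        estimate *= hits / n_per_stage
    return estimate

# Example: a hard-to-reach level L = 20 for a walk with downward drift.
levels = [1, 5, 10, 15, 20]
print(splitting_estimate(levels))   # compare with the gambler's-ruin formula
```

For nearest-neighbour dynamics the product of stage estimates is justified by the strong Markov property; DPR generalizes the idea to disjoint, not necessarily nested, subsets and to multiple subset transitions.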

Inference and rare event simulation for stopped Markov processes via reverse time sequential Monte Carlo

While time-reversal has proved to be a successful tool for inference in population genetics (Griffiths and Tavaré, 1994; Stephens and Donnelly, 2000; De Iorio and Griffiths, 2004a,b; Birkner and Blath, 2008; Griffiths et al., 2008; Hobolth et al., 2008; Birkner et al., 2011; Koskela et al., 2015), and has also been used in rare event simulation (Frater et al., 1989; Anantharam et al., 1990; Frater et al., 1990; Shwartz and Weiss, 1993) and physics (Jarzynski, 2006), its use in combination with SMC has been limited to a few examples in population genetics (Chen et al., 2005; Jenkins, 2012; Koskela et al., 2015). It is in our opinion somewhat surprising that such an approach has not been combined with SMC for more general inference. Lin et al. (2010) use an auxiliary distribution operating in reverse time within SMC, but only to improve resampling, not to construct trajectories. Their time-reversal is also not constructed as an approximation of the reverse-time dynamics of their model. Jasra et al. (2014) use a coalescent-based example from population genetics to motivate their work, and highlight it as an example of more general SMC in reverse time, but still formulate their results forwards in time.

Rare event simulation for stochastic dynamics in continuous time

Large deviation simulation techniques based on classical ideas of evolutionary algorithms [1,2] have been proposed under the name of ‘cloning algorithms’ in [3] for discrete and in [4] for continuous time processes, in order to study rare events of dynamic observables of interacting lattice gases. This approach has subsequently been applied in a wide variety of contexts (see e.g. [5–8] and references therein), and more recently, the convergence properties of the algorithm have become a subject of interest. Analytical approaches so far are based on a branching process interpretation of the algorithm in discrete time [9], with limited and mostly numerical results in continuous time [10]. Systematic errors arise from the correlation
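For orientation only, here is a minimal discrete-time sketch in the spirit of such cloning algorithms: a population of walkers is propagated under the original dynamics, weighted exponentially by the increment of the observable, and resampled (cloned or killed) in proportion to the weights. The two-state chain, the observable and all parameters are invented for the example, and the estimator is the naive one, without the error controls analysed in the works cited above.

```python
import numpy as np

def cloning_scgf(P, g, s, n_walkers=1000, n_steps=2000, seed=0):
    """Naive discrete-time cloning estimate of the scaled cumulant generating
    function  lambda(s) = lim_T (1/T) log E[exp(s * sum_t g(x_t, x_{t+1}))]
    for a finite Markov chain with transition matrix P."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    x = rng.integers(0, n, size=n_walkers)            # walker states
    log_mean_weight = 0.0
    for _ in range(n_steps):
        # propagate every walker one step under the original dynamics
        x_new = np.array([rng.choice(n, p=P[xi]) for xi in x])
        w = np.exp(s * g(x, x_new))                   # exponential tilting weights
        log_mean_weight += np.log(w.mean())
        # cloning step: resample walkers in proportion to their weights
        idx = rng.choice(n_walkers, size=n_walkers, p=w / w.sum())
        x = x_new[idx]
    return log_mean_weight / n_steps

# Example: two-state chain, observable counting 0 -> 1 jumps.
P = np.array([[0.9, 0.1], [0.5, 0.5]])
g = lambda a, b: ((a == 0) & (b == 1)).astype(float)
print(cloning_scgf(P, g, s=1.0))
```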

The theory of direct probability redistribution and its application to rare event simulation

Many techniques for reducing the number of trials in Monte Carlo simulation using Importance Sampling (IS) have been proposed. Fundamentally, IS is based on the notion of modifying the underlying probability mass in such a way that the rare events occur much more frequently (see [1] for a good introduction). To correct for this modification, the results are weighted in a way that yields an estimate that is statistically unbiased. The goal of this effort is to have an estimate that achieves the same accuracy (variance) as the conventional Monte Carlo estimate, but with a much smaller number of trials.
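A minimal sketch of this weighting idea on a Gaussian tail-probability toy problem (an assumed example, not the DPR setting of the paper): samples are drawn from a mean-shifted distribution under which the rare event is common, and each sample is multiplied by the likelihood ratio so that the estimate remains unbiased.

```python
import numpy as np

def is_tail_estimate(a, n=100_000, seed=0):
    """Importance-sampling estimate of P(X > a) for X ~ N(0, 1),
    sampling from the shifted law N(a, 1) and reweighting each draw by
    the likelihood ratio dP/dQ(x) = exp(-a * x + a**2 / 2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=a, scale=1.0, size=n)      # draws under the new measure
    weights = np.exp(-a * x + 0.5 * a**2)         # likelihood ratios
    return np.mean((x > a) * weights)

# P(X > 4) is about 3.2e-5; crude Monte Carlo with n = 1e5 trials would
# typically observe the event only a handful of times, if at all.
print(is_tail_estimate(4.0))
```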

Some Recent Results in Rare Event Estimation

Abstract. This article presents several state-of-the-art Monte Carlo methods for simulating and estimating rare events. A rare event occurs with a very small probability, but its occurrence is important enough to justify an accurate study. Rare event simulation calls for specific techniques to speed up standard Monte Carlo sampling, which requires unacceptably large sample sizes to observe the event a sufficient number of times. Among these variance reduction methods, the most prominent ones are Importance Sampling (IS) and Multilevel Splitting, also known as Subset Simulation. This paper offers some recent results on both aspects, motivated by theoretical issues as well as by applied problems. Résumé. This article offers a survey of several Monte Carlo methods for the estimation of rare events. A rare event is by definition an event of very small probability, but of crucial practical importance, which justifies a precise study. Since the classical Monte Carlo method proves prohibitively costly, specific techniques must be applied for their estimation. These fall into two broad categories: importance sampling on one side, multilevel methods on the other. We present here some recent results in these areas, motivated by practical as well as theoretical considerations.

Unrealistic comparative optimism: an unsuccessful search for evidence of a genuinely motivational bias

estimates from the same participants of the desirability and frequency of the events. Using this information, we showed that event desirability failed to predict any variance in the comparative optimism data once the influence of statistical artifacts was controlled for via event frequency. Indeed, the pattern in these data trended (weakly) towards pessimism. Studies 2 and 3 attempted to test unrealistic optimism in a more direct manner by providing participants with a fictional scenario that referred to an outcome occurring that would either affect them, or would affect others. There was no evidence that participants estimated the likelihood of a negative event affecting them as less likely than one that only affected others. In Study 3, this result held despite participants generally estimating negative outcomes as more likely than neutral outcomes—the opposite of an optimism bias (replicating the severity effect observed in [20,22–24]). Finally, Studies 4 and 5 utilised the same 2x2 design as Study 3, but moved from fictional scenarios to real outcomes (in which participants—or others—could lose £5 they had been endowed with). Study 4 replicated the results of Study 3. Study 5 failed to replicate the severity effect, but once more there was no evidence for a comparative optimism effect. Studies 2–5 provided the underlying likelihood information to participants in a variety of different ways—some more perceptual than others—thus demonstrating that our results generalize beyond a single paradigm.

A beginner's guide to systems simulation in immunology

Sauro et al. [5] mention that the construction of models of biochemical and cellular behaviour has traditionally been carried out through a bottom-up approach, which combines laboratory data and knowledge of a reaction network to produce a dynamic model. This process, however, requires the reaction network to be known and the possibility to carry out the various laboratory experiments. Furthermore, the modelling relies on the fact that data from laboratory experimentation matches real-world phenomena, which is not always correct. Samples can be compromised during collection or during the experimentation process. In addition, although bottom-up approaches are very useful for immunology, there are circumstances where they cannot be applied. Examples include when the reaction network process is not well understood, or laboratory experiments are known not to be able to reproduce the real-world reactions (for instance, given environmental differences such as temperature). In addition, the authors argue that “top-down modelling strategies are closer to the spirit of systems biology exactly because they make use of systems-level data, rather than having originated from a more reductionist approach of molecular purification”. The conclusion reached in their study was that there is no best approach, as it is preferable to view them as complementary. Their ideas match other studies in biology and other research areas, which investigate the merits of each approach and their combination for simulation [16]. To our knowledge, the most common system simulation approaches for immunology are Monte Carlo simulation, system dynamics, discrete-event simulation, cellular automata and agent-based simulation.

A rare event approach to high dimensional approximate Bayesian computation

Discrete data. RE-SMC can struggle if there is a discrete data variable x∗. It can be hard for SMC to move from accepting a set of latent variables A to another set A′ in which the range of possible x∗ values is smaller, because Pr(x ∈ A′ | x ∈ A, θ) may be very small. The issue is particularly obvious for ADAPT-RE-SMC, as the sequence may fail to move below some threshold for a large number of iterations. For FIXED-RE-SMC, it would instead result in high-variance likelihood estimates. In Sect. 5.2, this problem occurs for ν, the number of removals. There we adopt an application-specific solution by introducing continuous latent variables (pressure thresholds) into the distance function (5). It would be useful to investigate more general solutions from the rare event literature (e.g., Walter 2015). Despite these potential issues, RE-ABC can perform well with discrete data in practice, for example in the binned data model of Sect. 5.3.

Reducing the rare event: lessons from the implementation of a ventilator bundle

The ventilator-associated event (VAE) is a potentially avoidable complication of mechanical ventilation (MV) associated with poor outcomes. Although rare, VAEs and other nosocomial events are frequently targeted for quality improvement efforts consistent with the creed to ‘do no harm’. In October 2016, VA Greater Los Angeles (GLA) was in the lowest-performing decile of VA medical centres on a composite measure of quality, owing to GLA’s relatively high VAE rate. To decrease VAEs, we sought to reduce average MV duration of patients with acute respiratory failure to less than 3 days by 1 July 2017. In our first intervention (period 1), intensive care unit (ICU) attending physicians trained residents to use an existing ventilator bundle order set; in our second intervention (period 2), we updated the order set to streamline order entry and incorporate new nurse-driven and respiratory therapist (RT)-driven spontaneous awakening trial (SAT) and spontaneous breathing trial (SBT) protocols. In period 1, the proportion of eligible patients with SAT and SBT orders increased from 29.9% and 51.2% to 67.4% and 72.6%, respectively, with sustained improvements through December 2017. Mean MV duration decreased from 7.2 days at baseline to 5.5 days in period 1 and 4.7 days in period 2; statistical process control charts revealed no significant differences, but the difference between baseline and period 2 MV duration was statistically significant at p=0.049. Bedside audits showed RTs consistently performed indicated SBTs, but there were missed opportunities for SATs due to ICU staff concerns about the SAT protocol. The rarity of VAEs, small population of ventilated patients and infrequent use of sedative infusions at GLA may have decreased the opportunity to achieve staff acceptance and use of the SAT protocol. Quality improvement teams should consider frequency of targeted outcomes when planning interventions; rare events pose challenges in implementation and evaluation of change.

emergency department, discrete-event simulation, hybrid simulation, system dynamics

From the literature, it is apparent that waiting time has become a frequent topic of investigation and often serves as one of the performance measures in ED modeling. For instance, [5] adopted a queuing theory approach to study the effect of waiting times in the ED. However, in order to capture the complex behavior and stochastic nature of the ED, many have applied computer simulation to address waiting time problems [6], [7]. The developed models enabled ED staff to understand and review the ED delivery process in order to determine effective strategies to resolve patient waiting times. These studies, however, focus solely on factors within the ED and neglect other external factors that affect ED performance.

The long-term development of avalanche risk in settlements considering the temporal variability of damage potential

Changes in the extent of the avalanche accumulation areas were studied using the numerical avalanche model AVAL-1D (e.g. Christen et al., 2002a) in combination with the avalanche incident cadastre of former events. AVAL-1D is a one-dimensional avalanche dynamics program for the prediction of run-out distances, flow velocities and impact pressure of both dense flow and powder snow avalanches. The dense flow simulation is based on a Voellmy-fluid flow law, while the powder snow simulation follows Norem’s description of powder flow avalanche formation and structure (Norem, 1995). The avalanche calculation is based on a dry-Coulomb type friction (µ) and a velocity-squared friction (ξ) and was carried out following the guidelines given in the manual (Christen et al., 2002b). The fracture depths were obtained using Gumbel’s extreme value statistics on the possible maximum new snow heights within three days. The input parameters were calibrated on the basis of the legal hazard map.
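For context, the Voellmy-type flow law referred to above combines the two friction parameters roughly as follows (a standard textbook form, not quoted from the AVAL-1D documentation):

$$
S \;=\; \mu N \;+\; \frac{\rho\, g\, u^{2}}{\xi},
$$

where S is the basal frictional resistance, N the normal stress exerted by the flow, u the flow velocity, ρ the flow density and g the gravitational acceleration; µ mainly governs the run-out distance and ξ the flow velocity.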

User interfaces and discrete event simulation models

Simulation systems are a particular kind of decision support system. Their nature is, as for any modelling process, to understand and structure a problem that is constantly changing to a varying degree, both with respect to changes in the outside world, and with respect to the perceptions of the problem owners (Paul, 1991). That change has to be apparent in the working system. Simulation always involves experimentation, usually on a computer-based model of some system. The simulation model can be used to compare alternative systems, or the effect of changing a decision variable; to predict what would happen to the state of the system at some future point in time; and to investigate how the system would behave and react to normal and abnormal stimuli (Pidd, 1992a). The model is used as a vehicle for experimentation where trial and error and learning methods of experimentation help support management decision making. Such modelling systems have the desired characteristics of interactive user input, algorithmic processing, and intricate and flexible output. They typically involve non-linear development, which is interruptable and restartable to meet changing specifications in the light of improved understanding during development. Because of the research nature of modelling, there is an active need for the users or problem owners to participate in the modelling process. Requirements and specification are therefore particularly subject to change as an understanding of the problem being modelled evolves, for both the users and the developers. This can lead to severe difficulties in modelling implementation because of the changing basis of the model. One aim in these situations might be to determine general principles concerning the flexibility of applications to meet specifications and specification changes, and customisability.

An investigation on test driven discrete event simulation

Moreover, prior to constructing the model, no attempts were made to acquire information regarding the probability distribution of the arrival of the patients, the attendance duration, and so on. This does not mean that no such distribution was used in the model constructed above. Such distributions exist, and AnyLogic’s default distributions (along with their default parameters) were used without any change. The reason for such an approach stems from our test-first method. In TDD, minimal coding is essential to the success of the approach. Making an analogy, we decided to extend that idea to Test Driven Simulation by employing minimal data/knowledge. The purpose is to check if the available test can help to expose the missing data/knowledge of the system. As it happens, it does indeed. It is interesting to point out that this test was actually the second test written for the system, and when it was written the only components of the system were the source and the sink. It is also interesting to emphasize the fact that when one tries to resolve the issue, model modification and model parameters (instead of coding) are required to make the test pass. In other words, instead of writing minimal code to make the test pass, providing additional, though minimal, knowledge about the system is essential to make the test pass. Evidently, it is possible to take our approach to a higher level of abstraction where the unit tests contribute to the specification of model parameters. If the simulation model is given sufficient time, it forces the designer to introduce some additional parameters (minimal knowledge to make the test pass) with which the simulation passes the test.
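Purely as an illustration of this "minimal model, minimal knowledge" idea (the excerpt's own tests were written in AnyLogic; the class and parameter names below are invented for the example), a unit test can be written against a hand-rolled source-queue-sink model and will only pass once the minimal arrival and service knowledge has been supplied.

```python
import random
import unittest

class SourceQueueSink:
    """Deliberately minimal patient-flow model: exponential arrivals into a
    single exponential server, run over a fixed time horizon (minutes)."""
    def __init__(self, arrival_rate=1.0, service_rate=1.2, seed=0):
        self.arrival_rate, self.service_rate = arrival_rate, service_rate
        self.rng = random.Random(seed)

    def run(self, horizon=480.0):
        next_arrival, server_free_at, completed = 0.0, 0.0, 0
        while next_arrival < horizon:
            start = max(next_arrival, server_free_at)
            server_free_at = start + self.rng.expovariate(self.service_rate)
            if server_free_at <= horizon:
                completed += 1               # patient reaches the sink
            next_arrival += self.rng.expovariate(self.arrival_rate)
        return completed

class TestMinimalModel(unittest.TestCase):
    def test_some_patients_are_served(self):
        # Exposes missing knowledge: the test cannot pass until arrival and
        # service parameters (the "minimal knowledge") are provided.
        self.assertGreater(SourceQueueSink().run(), 0)

if __name__ == "__main__":
    unittest.main()
```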

Accounting for Input Uncertainty in Discrete-Event Simulation

distribution, we should compute the exact form of its density. This requires some high-dimensional numerical integrations or asymptotic approximations (see Section 4.1.3). These computational difficulties limited the use of Bayesian methods for more than two centuries. In the last decade, Markov Chain Monte Carlo (MCMC) methods are being increasingly used for dealing with such problems. The basic philosophy behind MCMC is to take a Bayesian approach and carry out the necessary numerical integrations using Monte Carlo simulation (see Gilks et al. (1996) for background). Instead of calculating exact or approximate estimates of the posterior density, this computer-intensive technique generates a stream of simulated values from the posterior distribution of any parameter or quantity of interest. These computations can be easily coded in the BUGS statistical package (Spiegelhalter et al., 1996) using a small set of BUGS commands. BUGS is software that carries out Bayesian inference using an MCMC technique known as Gibbs sampling (Gelfand and Smith, 1990).
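As a concrete, deliberately simple illustration of the Gibbs-sampling idea behind BUGS (the model, priors and data below are placeholders, not taken from the chapter), the following sketch samples the posterior of a normal mean and precision under conjugate priors by alternately drawing each parameter from its full conditional distribution.

```python
import numpy as np

def gibbs_normal(data, n_iter=5000, mu0=0.0, tau0=1e-3, a0=1e-3, b0=1e-3, seed=0):
    """Gibbs sampler for y_i ~ N(mu, 1/prec) with conjugate priors
    mu ~ N(mu0, 1/tau0) and prec ~ Gamma(shape=a0, rate=b0)."""
    rng = np.random.default_rng(seed)
    y, n = np.asarray(data, dtype=float), len(data)
    mu, prec = y.mean(), 1.0
    draws = []
    for _ in range(n_iter):
        # full conditional of mu given prec: normal
        tau_n = tau0 + n * prec
        mu_n = (tau0 * mu0 + prec * y.sum()) / tau_n
        mu = rng.normal(mu_n, 1.0 / np.sqrt(tau_n))
        # full conditional of prec given mu: gamma
        a_n = a0 + 0.5 * n
        b_n = b0 + 0.5 * np.sum((y - mu) ** 2)
        prec = rng.gamma(a_n, 1.0 / b_n)      # numpy uses scale = 1 / rate
        draws.append((mu, prec))
    return np.array(draws)

# Toy data with true mean 5 and standard deviation 2.
samples = gibbs_normal(np.random.default_rng(1).normal(5.0, 2.0, size=50))
print(samples[1000:, 0].mean())               # posterior mean of mu, near 5
```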

New techniques for pile-up simulation in ATLAS

The Transition Radiation Tracker (TRT) energy deposits in drift tubes from pile-up and the hard-scatter event cannot be directly added together, because the information is stored with coarser granularity in the pre-mixed RDOs. TRT high-threshold drift circles are consequently incorrect. Besides tracking, they are also used for particle identification. The solution would be to store the energy and timing information in the RDOs, significantly increasing the file size. Instead, a detector-occupancy-based correction to the high-threshold hits in the TRT was implemented, tuned separately for electrons and non-electrons (Fig. 6), because electrons have a higher probability of exceeding the high threshold than other particles [7]. Although a perfect match cannot be achieved, TRT tracks have a lower weight in the track fits at high pile-up. Remaining differences then only affect the fit quality but not the fit results.

Mitigation And Control Of Defeating Jammers Using P-1 Factorization

to be created. This trace file is usually generated by NS. It contains topology information, e.g. nodes and links, as well as packet traces. During a simulation, the user can produce topology configurations, layout information and packet traces using tracing events in NS. Once the trace file is generated, NAM can be used to animate it. Upon startup, NAM will read the trace file, create the topology, pop up a window, do layout if necessary and then pause at time 0. Through its user interface, NAM provides control over many aspects of the animation. The figure shows a screenshot of a NAM window, where the most important functions are explained. Although the NAM software contains bugs, as does the NS software, it works fine most of the time and causes only little trouble. NAM is an excellent first step for checking a simulation.

Hybrid Simulation for Sustainability of Decision Making

There are two types of variables: quantitative and qualitative. Quantitative variables are normally discrete variables and can be represented easily, such as patient admissions or total staff. On the other hand, qualitative variables are normally continuous variables that do not have a single stopping point, such as level of stress or recovery level. Zulkepli [14] argued that qualitative variables are more suitable to be captured using DES whilst quantitative variables are suitable to be captured using SD. Although these variables are modelled using different techniques, they can be linked together to produce more reliable output before any decisions are taken. To develop a simulation of a healthcare system, there are many variables involved which can be considered quantitative or qualitative. Some quantitative variables will influence the qualitative variables and vice versa. Based on the previous literature, we developed examples of variables that are “influencing” and “influenced by”. These variables were then divided according to which approach is applicable to capture them.

Analysis and simulation of rare events for SPDEs*

and where extrema are taken over smooth trajectories. The left-hand side of (6) is the so-called quasi-potential function, defined for any (x, y) ∈ R^d as the minimal cost of forcing the system to go from x to y in an indeterminate time. For gradient systems, the latter formula shows that it is the lowest energy barrier that needs to be overcome in order to reach y from x. The quasi-potential yields a rationale to compute rare events related to exit times, since for any δ > 0, we have the general formula
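(The displayed formula is cut off in the excerpt; the "general formula" referred to is presumably the classical Freidlin-Wentzell exit-time estimate, which in one standard form reads)

$$
\lim_{\varepsilon \to 0} \;\mathbb{P}_x\!\left( e^{(\bar V - \delta)/\varepsilon} \;<\; \tau_D^{\varepsilon} \;<\; e^{(\bar V + \delta)/\varepsilon} \right) \;=\; 1,
\qquad
\bar V \;=\; \inf_{y \in \partial D} V(x, y),
$$

where τ_D^ε is the exit time of the small-noise process from a domain D attracted to x, and V is the quasi-potential defined above.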

Numerical simulation of a rare winter hailstorm event over Delhi, India on 17 January 2013

Abstract. This study analyzes the cause of the rare occurrence of a winter hailstorm over New Delhi/NCR (National Capital Region), India. The absence of increased surface temperature or low-level moisture incursion during winter cannot generate the deep convection required for sustaining a hailstorm. Consequently, NCR shows very few cases of hailstorms in the months of December-January-February, making winter hail formation a question of interest. For this study, a recent winter hailstorm event on 17 January 2013 (16:00–18:00 UTC) occurring over NCR is investigated. The storm is simulated using the Weather Research and Forecasting (WRF) model with the Goddard Cumulus Ensemble (GCE) microphysics scheme with two different options: hail and graupel. The aim of the study is to understand and describe the cause of the hailstorm event over NCR through a comparative analysis of the two options of GCE microphysics. Upon evaluating the model simulations, it is observed that the hail option shows a precipitation intensity more similar to the Tropical Rainfall Measuring Mission (TRMM) observation than the graupel option does, and it is able to simulate hail precipitation. Using the model-simulated output with the hail option, a detailed investigation of the dynamics of the hailstorm is performed. The analysis based on a numerical simulation suggests that the deep instability in the atmospheric column led to the formation of hailstones as the cloud formation reached up to the glaciated zone, promoting ice nucleation. In winters, such instability conditions rarely form due to low-level available potential energy and moisture incursion along with upper-level baroclinic instability due to the presence of a western disturbance (WD). Such rare positioning is found to be lowering
