Sequential importance sampling

Top PDF sequential importance sampling:

Incorporating Model Uncertainty into the Sequential Importance Sampling Framework using a Model Averaging Approach, or Trans Dimensional Sequential Importance Sampling

A sequential Bayesian Monte Carlo approach is proposed in which model space can be explored during the Sequential Importance Sampling (SIS, a.k.a. particle filtering) fitting process. The algorithm allows model space to be explored while filtering forwards through time and takes an approach similar to Reversible Jump Markov Chain Monte Carlo (RJMCMC) strategies, whereby parameters jump into and out of the model structure. Possible efficiency gains of the new Trans-Dimensional SIS routine are discussed, and the approach is considered most beneficial when exploration of a large model space within the SIS framework is desired.
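
For readers who want the baseline the abstract builds on, here is a minimal sketch of one generic SIS step with adaptive resampling (a bootstrap-flavored Python illustration, not the paper's trans-dimensional algorithm; `transition_sample` and `likelihood` are assumed user-supplied callables):

```python
import numpy as np

def sis_step(particles, weights, y, transition_sample, likelihood):
    """One generic SIS step: propagate each particle through the (assumed)
    transition kernel, then reweight by the likelihood of observation y."""
    particles = transition_sample(particles)       # draw x_t ~ q(. | x_{t-1})
    weights = weights * likelihood(y, particles)   # w_t proportional to w_{t-1} p(y_t | x_t)
    weights = weights / weights.sum()              # normalize
    # Resample when the effective sample size collapses (the standard
    # fix for weight degeneracy in SIS/R).
    ess = 1.0 / np.sum(weights ** 2)
    if ess < 0.5 * len(particles):
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```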

Sequential importance sampling for bipartite graphs with applications to likelihood-based inference

In the analysis of bipartite graphs, most research has focused on analytic methods for analyzing properties of the graph or calculating graph statistics. Where analytic methods are impractical, research has generally turned to approximation methods. The approximation method most commonly utilized is Markov chain Monte Carlo (MCMC), which often provides a means to representatively sample the graph space. The standard MCMC algorithm is that developed by Snijders (1991) and extended by Rao et al. (1996). While useful in solving many difficult problems, MCMC has its limitations, and alternative Monte Carlo methods are of interest. Recent research has brought to light the effectiveness of sequential importance sampling (SIS) in solving certain problems for which analytic methods and current MCMC algorithms do not provide good solutions, or any solution at all.

networksis: A package to simulate bipartite graphs with fixed marginals through sequential importance sampling

In the analysis of bipartite graphs, most research has focused on analytic methods of analyzing properties of the graph or calculating graph statistics. Where analytic methods are impractical, research has generally turned to approximation methods. The approximation method most commonly utilized is Markov chain Monte Carlo (MCMC), which often provides a means to representatively sample the graph space. The standard MCMC algorithm is that developed by Snijders (1991) and extended by Rao, Jana, and Bandyopadhyay (1996). While useful in solving many difficult problems, MCMC has its limitations, and alternative Monte Carlo methods are of interest. Recent research has brought to light the effectiveness of sequential importance sampling (SIS) in solving certain problems for which analytic methods and current MCMC algorithms do not provide good solutions, or any solution at all.

Global Sampling for Sequential Filtering over Discrete State Space

In many situations, there is a need to approximate a sequence of probability measures over a growing product of finite spaces. Whereas it is in general possible to determine analytic expressions for these probability measures, the number of computations needed to evaluate them grows exponentially, thus precluding real-time implementation. Sequential Monte Carlo (SMC) techniques, which consist in approximating the flow of probability measures by the empirical distribution of a finite set of particles, are attractive for addressing this type of problem. In this paper, we present a simple implementation of the sequential importance sampling/resampling (SISR) technique for approximating these distributions; the method relies on the fact that, the space being finite, it is possible to consider every offspring of the trajectory of particles. The procedure is straightforward and well-suited to practical implementation. A limited Monte Carlo experiment is carried out to support our findings.
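
Since the state space is finite, the "global" step can weight every offspring of every particle exactly before selecting. A minimal sketch of that idea for a discrete hidden Markov model (the names and the multinomial selection at the end are illustrative assumptions, not necessarily the paper's exact procedure):

```python
import numpy as np

def global_sisr_step(states, weights, y_lik, P, rng):
    """One SISR step over a finite space of K states: enumerate all K
    offspring of each of the N particles, compute their exact weights,
    then select N particles from the N*K candidates.

    states:  (N,) current particle states (integers in 0..K-1)
    weights: (N,) normalized particle weights
    y_lik:   (K,) likelihood of the current observation for each state
    P:       (K, K) transition matrix, P[i, j] = p(x_t = j | x_{t-1} = i)
    """
    states = np.asarray(states)
    N, K = len(states), P.shape[0]
    # Exact unnormalized weight of every candidate (particle n moved to state j).
    cand_w = (weights[:, None] * P[states] * y_lik[None, :]).ravel()
    cand_w /= cand_w.sum()
    idx = rng.choice(N * K, size=N, p=cand_w)   # resample from the full enumeration
    return idx % K, np.full(N, 1.0 / N)
```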

A note on auxiliary particle filters

For linear Gaussian models, these distributions admit closed-form expressions and their statistics can be computed using Kalman techniques. For non-linear, non-Gaussian models, the distributions do not typically admit a closed form and it is necessary to employ numerical approximations. Recently, the class of Sequential Monte Carlo (SMC) methods, also known as particle filters, has emerged to solve this problem; see [6,11] for a review of the literature. Two classes of methods are primarily used: Sequential Importance Sampling and Resampling (SISR) algorithms [3,11,5] and Auxiliary Particle Filters (APF) [12,1,13].

On extended state space constructions for Monte Carlo methods

In this chapter, we develop algorithms for conducting inference in discretely observed piecewise deterministic processes. This class of models is defined in Section 4.2 where we also provide motivating examples. Section 4.3 describes an existing sequential Monte Carlo sampler for these models and investigates some of its properties. Section 4.4 derives a novel representation for this algorithm. In addition to ensuring the existence of the importance weights, this representation permits the use of backward sampling and ancestor sampling within particle Gibbs samplers and also allows the use of forward filtering–backward sampling schemes. Section 4.5 provides simulation results and comments on the utility of the novel particle Gibbs step which was presented in Subsection 3.4.4. An extended version of the work presented in this chapter was published as Finke et al. (2014).

Cross Domain Answer Ranking using Importance Sampling

We consider the problem of learning how to rank answers across domains in community question answering using stylistic features. Our main contribution is an importance sampling technique for selecting training data per answer thread. Our approach is evaluated across 30 community sites and shown to be significantly better than random sampling. We show that the most useful features in our model relate to answer length and overlap with the question.

A weight-bounded importance sampling method for variance reduction

Since its invention, the MC method has found vast applications in many fields of science and engineering, ranging from statistical physics [7] to financial engineering [3]. A well-known issue with the standard MC method is its rather slow convergence: the standard error of an MC estimator is proportional to 1/√n, with n being the number of samples, so a rather large number of samples may be required to produce a reliable estimate in many practical problems. To this end, the technique of importance sampling (IS) [8, 10] is often used to reduce the variance. Simply speaking, the IS method draws samples from an alternative distribution (known as the IS distribution) instead of the original one, and then corrects for the bias introduced by the change of distribution by assigning an appropriate weight to each sample. Designing the IS distribution is the key step in implementing the IS method, and a good IS distribution can significantly improve sampling efficiency. On the other hand, if the sampling distribution is not properly designed, the IS simulation will perform poorly and, in some extreme cases, may fail completely, in the sense that it results in infinite estimator variance [5]. In this case, the IS method may yield completely wrong estimates. Unfortunately, it is usually not possible to know in advance whether the chosen IS distribution is appropriate. It therefore becomes a rather important task to develop methods that can prevent the infinite estimator variance of standard IS. To address the issue, a scheme called defensive IS (DIS) was proposed in [6]; the basic idea is to use a mixture of the chosen IS distribution and one that serves as a safeguard. In practice, the safeguard distribution is usually the original distribution. The idea was further extended and improved in [9].
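
To make the defensive-mixture idea concrete, here is a minimal sketch (a generic DIS illustration, not the paper's weight-bounded method). Sampling from q = alpha*p + (1 - alpha)*g bounds the weight p/q by 1/alpha, which is exactly what prevents the infinite-variance failure mode:

```python
import numpy as np
from scipy import stats

def defensive_is(h, n=100_000, alpha=0.1, seed=0):
    """Estimate E_p[h(X)] for p = N(0, 1) using a defensive mixture proposal
    q = alpha * p + (1 - alpha) * g, where g = N(3, 1) is an (illustrative)
    chosen IS distribution. The weight p/q is bounded by 1/alpha."""
    rng = np.random.default_rng(seed)
    p, g = stats.norm(0.0, 1.0), stats.norm(3.0, 1.0)
    from_p = rng.random(n) < alpha                 # mixture component labels
    x = np.where(from_p, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))
    w = p.pdf(x) / (alpha * p.pdf(x) + (1 - alpha) * g.pdf(x))
    return np.mean(w * h(x))

# Example: a tail probability for which g supplies most of the samples.
print(defensive_is(lambda x: (x > 2.5).astype(float)))  # true value ~0.0062
```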

Adaptive Importance Sampling from Finite State Automata

We have presented an adaptive importance sampler that can be used to approximate expected values taken over the languages of probabilistic regular tree automata. These values play a central role in many natural language processing applications and cannot always be computed analytically. Our sampler adapts itself for improved performance and only requires the ability to evaluate all involved functions on single trees. To achieve adaptiveness, we have introduced a convex objective function which does not depend on a complex normalization term. We hope that this simple technique will allow researchers to use more complex models in their research.

A flexible importance sampling method for integrating subgrid processes

Profiles (over height levels) were generated for simulations with 32 sample points. To reduce the role of a "lucky" random seed in the comparison and thereby better distinguish the methods, an ensemble of 12 simulations was used. Figure 3 shows profile plots of the four tendencies over height levels. These plots are averaged over all 864 timesteps and over the 12 ensemble members, and serve to indicate that SILHS converges to the analytic solution at all height levels, not only at the importance sampling level. Figure 4 shows the RMSE of the SILHS solutions at each height level compared to the analytic solution, for all timesteps and ensemble members. The 8Cat and 2Cat-CldPcp methods show improved results compared to the 2Cat-Cld method at height levels between 1000 and 2500 m; these are the height levels where the improvement in the evaporation term is strongest. At levels below 1000 m (far below the importance sampling level of about 2000 m), all methods start to show considerable noise. Interestingly, this noise remains even after time and ensemble averaging, which highlights the large degree of variability in cumulus clouds and the need for careful parameterization of this variability.

Efficient sequential sampling for global optimization in static and dynamic environments

Since the DIN sampling strategy (Section 4.2.5) requires parameter tuning for the noise level, each experiment has to be run in two steps. The first step is to find the optimal noise level s* by running a first set of simulations of the optimizer with the DIN sampling strategy under different noise discount values and empirically choosing the one with the best performance. Given the high computational cost of running a full set of experiments, we restrict the full analysis to a single performance measure. Offline error is chosen as the preferred measure because it reflects a real-life scenario in which the best known solution would be implemented while the search for a better parameter configuration carries on. The remainder of the experiments therefore focus mainly on this performance measure, but the same procedure would apply to the average error. Since the changes of the objective function are stochastic, several replications are required to give statistical significance to the interpretation of the results, so R = 64 replications were run in this first part of the experiment.

Adaptive and unequal probability survey designs for environmental management

The efficiency of adaptive cluster sampling depends firstly on how clustered the population is and secondly on the survey design. As a general principle, the more clustered the population, the more efficient adaptive cluster sampling is compared with simple random sampling. The design choices in adaptive cluster sampling are the sample unit size and shape, the size of the initial sample, the criterion for adaptive selection, and the neighborhood definition (e.g. the surrounding 2, 4, or 8 neighboring units). There is considerable literature on how to design an efficient survey, much of it reviewed in Smith et al. (2004) and Turk and Borkowski (2005). A general principle is that efficient designs are those in which the final sample size is not excessively larger than the initial sample size and the networks are small. This can be achieved by using a strict criterion for adapting and a small neighborhood definition (Brown 2003).
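
A minimal sketch of the adaptive selection mechanics on a grid population (illustrative assumptions: a 4-unit neighborhood and a simple threshold criterion y >= c):

```python
import numpy as np
from collections import deque

def adaptive_cluster_sample(y, n_initial, c, rng):
    """Adaptive cluster sampling on a 2-D grid of unit values y: draw a
    simple random initial sample of units, and whenever a sampled unit
    meets the criterion y >= c, add its 4 neighbors; keep expanding until
    no newly added unit meets the criterion."""
    rows, cols = y.shape
    cells = [(i, j) for i in range(rows) for j in range(cols)]
    picks = rng.choice(len(cells), size=n_initial, replace=False)
    sampled = {cells[k] for k in picks}
    queue = deque(sampled)
    while queue:
        i, j = queue.popleft()
        if y[i, j] >= c:  # criterion met: adapt by sampling the neighborhood
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and (ni, nj) not in sampled:
                    sampled.add((ni, nj))
                    queue.append((ni, nj))
    return sampled  # initial units plus the adaptively added networks
```

A stricter criterion c and the small 4-unit neighborhood keep the final sample size close to n_initial, in line with the design principle above.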

The importance of voting order for decisions by sequential majority voting

To determine the actual state as well as possible, the firm convenes a jury of experts. For example, if the decision of the State is whether to send someone to jail, it convenes a legal jury to decide whether the defendant is guilty (A) or innocent (B). The organization running Wimbledon forms an umpiring team to determine whether a ball is In or Out, and awards the point accordingly. More often, the jury may consist of economists who aim to determine the future state of the economy in order to decide whether to make an investment. In all these cases, the experts (jurors) will in general have different abilities (expertise, judgement, eyesight, economic knowledge) to determine the actual state of Nature. In this paper we assume that the jurors reach their verdict by majority rule in an open sequential vote, also known as roll call voting, and that their common aim is to maximize the probability that their verdict is correct (called strategic voting). We call such voters jurors because they vote for the truth rather than for their preferred outcomes. It is often assumed in the literature that voting order does not matter, as each voter can assume he is making the pivotal vote. However, for heterogeneous juries the voting order assuredly does matter, as voters need to know not only how many of the others voted each way but also which jurors voted each way. This observation is the starting point of our investigations.

Adaptive list sequential sampling method for population-based observational studies

We used the approximation technique proposed by Hájek [17,18]. The set of participants s was obtained with the adaptive list sequential sampling method. Before the recruitment period started, we specified the vector π^(0). We considered a vector π^(0) in which the probability of being included in s was proportional to the size of group g in the population. Because not all groups were observed with the same frequency in D, we oversampled the smaller subgroups in such a way that each group g was observed with similar frequency in s. For each invited individual with x = 1, we have to invite 2, 2, 4, and 4 individuals with x = 2, 3, 4, and 5, respectively, to obtain an equal number of individuals from each group in s. Therefore, depending on the value of x_i, we used the following probabilities for
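
The excerpt cuts off before the probabilities themselves, so they are left unstated here. As an illustration of where the 1, 2, 2, 4, 4 invitation ratios come from, here is a small sketch with hypothetical group frequencies chosen to be consistent with those ratios:

```python
import numpy as np

# Hypothetical relative frequencies of groups x = 1..5 in the register D
# (illustrative values only, not the study's data).
freq = np.array([0.40, 0.20, 0.20, 0.10, 0.10])

# To observe each group with similar frequency in s, invite inversely
# proportionally to frequency, scaled so that group x = 1 gets weight 1.
invites = (1.0 / freq) / (1.0 / freq[0])
print(invites)  # -> [1. 2. 2. 4. 4.], matching the ratios in the excerpt
```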

Importance sampling for multimodal functions and application to pricing exotic options

The authors assume a capability to identify the important modes of the importance function. [They do not define this notion precisely. Loosely speaking, a mode is important if the function is large at the mode (or the integral is large over a region appropriately linked to the mode) relative to the other modes.] They choose the degrees of freedom for each t component based on application-specific considerations. Their procedure performs constrained continuous minimization of a Monte Carlo estimate of the squared coefficient of variation, where: (a) the components are initially centered at the known modes; and (b) the decision variables are the mixture weights, the mean vectors, and the covariance matrices of all the components.
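
A minimal sketch of the mixture-proposal construction, simplified to Gaussian components with fixed, illustrative parameters (the authors use t components and optimize the mixture parameters by minimizing an estimate of the squared coefficient of variation, which is omitted here):

```python
import numpy as np
from scipy import stats

def mixture_is(h, p_pdf, modes, scales, mix_w, n=50_000, seed=0):
    """Estimate the integral of h(x) p(x) dx with a mixture proposal whose
    components sit at the known modes of the integrand h * p."""
    rng = np.random.default_rng(seed)
    mix_w = np.asarray(mix_w) / np.sum(mix_w)
    comp = rng.choice(len(modes), size=n, p=mix_w)           # pick components
    x = rng.normal(np.asarray(modes)[comp], np.asarray(scales)[comp])
    q = sum(w * stats.norm.pdf(x, m, s)                      # mixture density
            for w, m, s in zip(mix_w, modes, scales))
    return np.mean(h(x) * p_pdf(x) / q)

# Example: a bimodal integrand under a standard normal density, with one
# proposal component per known mode.
est = mixture_is(h=lambda x: np.exp(-(np.abs(x) - 3.0) ** 2),
                 p_pdf=stats.norm(0.0, 1.0).pdf,
                 modes=[-3.0, 3.0], scales=[1.0, 1.0], mix_w=[0.5, 0.5])
```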

Sequential Clustering and Contextual Importance Measures for Incremental Update Summarization

Traditional extractive multi-document summarization approaches (Nenkova and McKeown, 2011) extract unmodified sentences from source documents to produce a summary. Graph-based approaches (Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Parveen and Strube, 2015) represent source documents as a graph and use algorithms such as HITS (Kleinberg, 1999) and PageRank (Brin and Page, 2012) to find important information. Centroid-based summarization systems (Carbonell and Goldstein, 1998; Radev et al., 2000) estimate sentence importance by computing sentence centrality in the source documents. Similarly to these systems, our approach extracts unmodified sentences and uses centrality as a signal for importance. The systems above perform retrospective summarization, meaning that they analyze all source documents at once, independently of their publication date. In IUS, however, this is not possible, since important information has to be published as soon as possible. Furthermore, the mentioned systems create summaries of fixed length. In IUS we observe a situation where it is not clear in advance how long a summary has to be to properly summarize an event. Standard extractive summarization systems are therefore not suited for IUS.

Supplementary information for: Macromolecular modeling and design in Rosetta: new methods and frameworks

Fast Protein Loop Sampling and Structure Prediction Using Distance-Guided Sequential Chain-Growth Monte Carlo Method

Importance sampling for unbiased on demand evaluation of knowledge base population

Knowledge base population (KBP) systems take in a large document corpus and extract entities and their relations. Thus far, KBP evaluation has relied on judgements on the pooled predictions of existing systems. We show that this evaluation is problematic: when a new system predicts a previously unseen relation, it is penalized even if it is correct. This leads to significant bias against new systems, which counterproductively discourages innovation in the field. Our first contribution is a new importance-sampling based evaluation which corrects for this bias by annotating a new system's predictions on-demand via crowdsourcing. We show this eliminates bias and reduces variance using data from the 2015 TAC KBP task. Our second contribution is an implementation of our method made publicly available as an online KBP evaluation service. We pilot the service by testing diverse state-of-the-art systems on the TAC KBP 2016 corpus and obtain accurate scores in a cost-effective manner.
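
The bias correction itself can be sketched generically: choose which predictions to annotate by sampling from a proposal distribution and reweight the crowd labels, so the precision estimate stays unbiased no matter how the annotation budget is concentrated (a sketch under assumed data structures, not the authors' released service):

```python
import numpy as np

def is_precision(predictions, q, annotate, budget, rng):
    """Unbiased estimate of a system's precision while annotating only
    `budget` sampled predictions.

    predictions: the system's relation predictions
    q:           proposal probabilities over predictions (sums to 1); may
                 favor instances that are cheap or informative to annotate
    annotate:    callable returning 1.0 if a prediction is judged correct
    """
    n = len(predictions)
    idx = rng.choice(n, size=budget, p=q)
    # Precision is an expectation under the uniform target p_i = 1/n,
    # so each sampled label gets importance weight (1/n) / q_i.
    w = (1.0 / n) / np.asarray(q)[idx]
    labels = np.array([annotate(predictions[i]) for i in idx])
    return float(np.mean(w * labels))
```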

Conditional importance sampling and its application to ATM switch analysis

Often it is not possible to find analytical expressions for such probabilities. Monte Carlo simulation quickly becomes intractable due to the low probabilities involved, although Importance Sampling (IS) techniques have been used as a means of increasing simulation efficiency. Parametric IS methods are not very effective in cases where the input processes are characterized by uniform input distributions (e.g., random delays), which arise frequently in communication systems and networks. In this paper, we present a conditional IS scheme for systems with input processes that can be characterized by uniform input distributions. The scheme adaptively modifies an initial biasing strategy as samples are taken and also incorporates a problem-specific component that enables the algorithm to be applied to a diverse set of problems. The overall approach is more effective than parametrically biasing the uniform input distributions. We use the conditional biasing algorithm to estimate rare jitter probabilities in ATM switches for CBR sources multiplexed with heterogeneous background VBR and CBR sources. For the experimental systems considered, we observe that the improvement in simulation efficiency is inversely proportional to the probability being estimated.
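
As a generic illustration of biasing a uniform input distribution (not the paper's conditional, adaptive scheme): replace a Uniform(0, 1) input with a Beta proposal that concentrates mass in the rare region and correct with the likelihood ratio.

```python
import numpy as np
from scipy import stats

def rare_event_is(threshold=0.999, n=100_000, a=50.0, seed=0):
    """Estimate P(U > threshold) for U ~ Uniform(0, 1) by sampling from
    Beta(a, 1), which piles mass near 1, and weighting each sample by the
    likelihood ratio 1 / beta_pdf (the uniform density is 1 on (0, 1)).
    The choice Beta(50, 1) is illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    x = rng.beta(a, 1.0, size=n)
    w = 1.0 / stats.beta.pdf(x, a, 1.0)
    return float(np.mean(w * (x > threshold)))

# True value is 1 - 0.999 = 1e-3; plain Monte Carlo would need on the
# order of 10^6 samples for a comparable relative error.
print(rare_event_is())
```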

Empirical Bayes Prediction for Variables Process Mean in Sequential Sampling Plan

Acceptance sampling plans are applied when inspecting large lots, where it is often impossible to inspect every product in the batch; a well-designed plan reduces both the producer's risk and the consumer's risk. Acceptance sampling was developed by Dodge and Romig in the 1920s. Inspection can be classified into three main forms: no inspection; 100% inspection; and random inspection, or a sampling plan, in which samples are taken randomly, checked for possible defects, and the lot is then accepted or rejected. As for the advantages of acceptance sampling, most inspectors apply it for destructive testing, for auditing large lots, or when using suppliers with a good quality history, because the method can reduce damage through less handling of the products, reduce errors, and save cost and time in the manufacturing process. An acceptance sampling plan by variables uses quantitative data that can be measured on a continuous scale and is assumed to follow a normal distribution [1]. A further advantage of the variables sampling plan is that the lots provide more information than the
