Monte Carlo methods

Quasi Monte Carlo and multilevel Monte Carlo methods for computing posterior expectations in elliptic inverse problems

We are interested in computing the expectation of a functional of a PDE solution under a Bayesian posterior distribution. Using Bayes’ rule, we reduce the problem to estimating the ratio of two related prior expectations. For a model elliptic problem, we provide a full convergence and complexity analysis of the ratio estimator in the case where Monte Carlo, quasi-Monte Carlo or multilevel Monte Carlo methods are used as estimators for the two prior expectations. We show that the computational complexity of the ratio estimator to achieve a given accuracy is the same as the corresponding complexity of the individual estimators for the numerator and the denominator. We also include numerical simulations, in the context of the model elliptic problem, which demonstrate the effectiveness of the approach.
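
To make the ratio idea concrete, here is a minimal plain Monte Carlo sketch of the estimator E_post[Q] ≈ (Σ Q(θ_i)L(θ_i)) / (Σ L(θ_i)) with θ_i drawn from the prior; the one-dimensional forward map, prior and noise model below are illustrative assumptions, not the paper's elliptic PDE setting, and the QMC/MLMC refinements analysed in the paper are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's elliptic PDE): the "forward map"
# G(theta) plays the role of the PDE solution functional, and y_obs is data
# observed with Gaussian noise of standard deviation sigma.
def forward_map(theta):
    return np.sin(theta) + 0.1 * theta**2

y_obs, sigma = 0.8, 0.2

def likelihood(theta):
    return np.exp(-0.5 * ((y_obs - forward_map(theta)) / sigma) ** 2)

def quantity_of_interest(theta):
    return theta**2

# Ratio estimator: draw samples from the prior (here N(0,1)), then estimate
#   E_post[Q] = E_prior[Q * L] / E_prior[L]
# using the same prior samples in the numerator and the denominator.
N = 100_000
theta = rng.standard_normal(N)
L = likelihood(theta)
Q = quantity_of_interest(theta)

posterior_mean_Q = np.mean(Q * L) / np.mean(L)
print(f"ratio estimate of E_post[Q]: {posterior_mean_Q:.4f}")
```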

Performant Hybrid and Parallel Domain Decomposed Monte Carlo Methods for Radiation Transport.

This document further develops performant algorithms for massively parallel, hybrid Monte Carlo methods for radiation transport. New algorithms are implemented in the Shift Monte Carlo code developed at Oak Ridge National Laboratory. We provide the governing neutron transport equation and long-standing solution techniques. First, we explore the convergence of two acceleration techniques when the fixed-point maps are corrupted with stochastic noise: Anderson Acceleration and Nonlinear Diffusion Acceleration. Next, we consider the Monte Carlo algorithm for solving the transport equation. It simulates a finite number of particle histories from known probability distribution functions inside a given domain. It avoids the discretization error associated with deterministic algorithms, but requires a significant number of random samples. The Shift code is targeted at leadership-class high performance computing platforms, like the Titan and Summit supercomputers at the Oak Ridge Leadership Computing Facility.

On Markov chain Monte Carlo methods for tall data

Markov chain Monte Carlo methods are often deemed too computationally intensive to be of any practical use for big data applications, and in particular for inference on datasets containing a large number n of individual data points, also known as tall datasets. In scenarios where data are assumed independent, various approaches to scale up the Metropolis-Hastings algorithm in a Bayesian inference context have been recently proposed in machine learning and computational statistics. These approaches can be grouped into two categories: divide-and-conquer approaches and subsampling-based algorithms. The aims of this article are as follows. First, we present a comprehensive review of the existing literature, commenting on the underlying assumptions and theoretical guarantees of each method. Second, by leveraging our understanding of these limitations, we propose an original subsampling-based approach relying on a control variate method which samples under regularity conditions from a distribution provably close to the posterior distribution of interest, yet can require less than O(n) data point likelihood evaluations at each iteration for certain statistical models in favourable scenarios. Finally, we emphasize that we have so far only been able to propose subsampling-based methods which display good performance in scenarios where the Bernstein-von Mises approximation of the target posterior distribution is excellent. It remains an open challenge to develop such methods in scenarios where the Bernstein-von Mises approximation is poor.
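
A rough sketch of the control-variate idea behind such subsampling-based estimators, under simplifying assumptions (an i.i.d. Poisson toy model and a fixed reference point; this is not the authors' full Metropolis-Hastings scheme): a second-order Taylor expansion of each log-likelihood term around a reference point is summed exactly, and only a small random subsample of terms is evaluated to correct it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model (assumed for illustration): i.i.d. Poisson data with rate
# exp(theta), so ell_i(theta) = x_i*theta - exp(theta) up to a constant.
n = 100_000
theta_true = 0.7
x = rng.poisson(lam=np.exp(theta_true), size=n).astype(float)

def log_lik_terms(theta, idx):
    return x[idx] * theta - np.exp(theta)

# Control variate: second-order Taylor expansion of each ell_i around a fixed
# reference point theta_star (e.g. a preliminary estimate such as the MLE).
theta_star = np.log(x.mean())
g_i = x - np.exp(theta_star)        # first derivative of ell_i at theta_star
h_const = -np.exp(theta_star)       # second derivative (same for every i here)
ell_star_sum = log_lik_terms(theta_star, np.arange(n)).sum()
g_sum = g_i.sum()

def subsampled_loglik(theta, m=1_000):
    """Estimate sum_i ell_i(theta) from m << n exact term evaluations."""
    d = theta - theta_star
    # Exact sum of the Taylor control variates over all n terms.
    taylor_sum = ell_star_sum + g_sum * d + 0.5 * n * h_const * d**2
    # Unbiased Monte Carlo correction from a random subsample of residuals.
    idx = rng.integers(0, n, size=m)
    resid = log_lik_terms(theta, idx) - (
        log_lik_terms(theta_star, idx) + g_i[idx] * d + 0.5 * h_const * d**2
    )
    return taylor_sum + n * resid.mean()

theta_test = 0.75
print(log_lik_terms(theta_test, np.arange(n)).sum(), subsampled_loglik(theta_test))
```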

On extended state space constructions for Monte Carlo methods

In this chapter, we introduce Markov chain Monte Carlo methods. Section 3.1 outlines the main idea behind this class of Monte Carlo schemes and shows that they can be viewed as (an approximation to) a special case of the marginalised one-sample importance sampling scheme presented in Chapter 1. Using the same ideas again at a lower level, we construct a generic kernel which admits essentially every known Markov chain Monte Carlo kernel as a special case. This is done in Section 3.2. In Section 3.3, we demonstrate that multiple-proposal and ‘randomised’ Metropolis–Hastings kernels, pseudo-marginal kernels, and ensemble Markov chain Monte Carlo kernels can all be viewed as instances of the generic kernel. Other special cases, conditional sequential Monte Carlo kernels, which play a major rôle in Part II of this work, are detailed in Section 3.4. In particular, we show that the variance-reduction techniques backward sampling and ancestor sampling share the same extended target distribution. To our knowledge, this is a new result.

On solving integral equations using Markov chain Monte Carlo methods

Computing (3) is challenging as it involves an infinite sum of integrals of increasing dimension. Monte Carlo methods provide a mechanism for dealing with such integrals. A sequential importance sampling strategy arises as a natural approach to this problem and that is the approach which has been taken most often in the literature. Section 2.1 summarises this approach and provides a path-space interpretation of the importance sampling which motivates the development of a novel approach in Section 2.2.
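
In the simplest setting, the flavour of such an estimator can be sketched for a one-dimensional Fredholm equation of the second kind, f(x) = g(x) + ∫ K(x, y) f(y) dy, whose Neumann-series solution is estimated by random walks killed with a fixed probability; the kernel, source term and killing probability below are illustrative choices, not the paper's path-space construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative Fredholm equation of the second kind on [0, 1]:
#   f(x) = g(x) + \int_0^1 K(x, y) f(y) dy,
# with K(x, y) = 0.5 and g(x) = x, whose exact solution is f(x) = x + 0.5.
def K(x, y):
    return 0.5

def g(x):
    return x

def estimate_f(x0, n_walks=200_000, p_absorb=0.5):
    """Unbiased 'collision' estimator of the Neumann series for f(x0)."""
    total = 0.0
    for _ in range(n_walks):
        x, w, score = x0, 1.0, 0.0
        while True:
            score += w * g(x)              # score the source term at every visit
            if rng.random() < p_absorb:    # kill the walk with fixed probability
                break
            y = rng.random()               # next state sampled from Uniform(0, 1)
            w *= K(x, y) / ((1.0 - p_absorb) * 1.0)   # importance weight update
            x = y
        total += score
    return total / n_walks

print(estimate_f(0.3))   # should be close to the exact value 0.8
```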

Development of Monte Carlo Methods in Hypersonic Aerodynamics

Abstract. The advantage of Monte Carlo methods in computational aerodynamics and the application of these methods to rarefied flow fields are described in the present paper. The direct statistical simulation of aerodynamic processes with the solution of kinetic equations is considered. It is shown that the modern stage of the development of computational methods is impossible without a comprehensive approach to algorithm development that combines the physical nature of the problem, the mathematical model, the theory of computational mathematics, and stochastic processes. Main directions in the development of the direct simulation Monte Carlo (DSMC) method in computational aerodynamics are discussed, and some calculation results obtained with the method are presented.

Some Monte Carlo methods for jump diffusions

To motivate this class of Monte Carlo methods we begin by introducing HMMs in Section 3.1. HMMs are the natural and flexible framework under which the jump diffusions we consider in this thesis are observed, whereby at discrete points in time we observe some underlying evolving process of interest with error. HMMs are particularly appealing due to their suitability for tackling online problems (whereby sequential information must be processed on arrival without loss of computational efficiency), which is a consequence of the recursive manner in which various inferential problems can be represented. In Section 3.1.1 we draw particular attention to the filtering problem that we tackle in this thesis (in which we use all observations up to any point in time to make a probabilistic assessment of the state of the process at that point in time), highlighting in Section 3.1.4 the situations in which solutions can be found. Unfortunately, for our purposes in this thesis we require methodology for problems in which analytic solutions cannot be found, which motivates our use of Monte Carlo methodology.

Stability of sequential Markov Chain Monte Carlo methods

Abstract. Sequential Monte Carlo Samplers are a class of stochastic algorithms for Monte Carlo integral estimation w.r.t. probability distributions, which combine elements of Markov chain Monte Carlo methods and importance sampling/resampling schemes. We develop a stability analysis by functional inequalities for a nonlinear flow of probability measures describing the limit behavior of the methods as the number of particles tends to infinity. Stability results are derived both under global and local assumptions on the generator of the underlying Metropolis dynamics. This allows us to prove that the combined methods sometimes have good asymptotic stability properties in multimodal setups where traditional MCMC methods mix extremely slowly. For example, this holds for the mean field Ising model at all temperatures.
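
As a point of reference, a minimal tempered SMC sampler showing the three ingredients the abstract combines (importance reweighting, resampling, and a Metropolis move); the bimodal target, Gaussian reference distribution and temperature schedule are illustrative assumptions, not the settings analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative multimodal target (a two-component Gaussian mixture) and a
# broad Gaussian reference distribution; both are assumptions for this sketch.
def log_target(x):
    return np.logaddexp(-0.5 * ((x + 4.0) / 0.5) ** 2,
                        -0.5 * ((x - 4.0) / 0.5) ** 2)

def log_ref(x):
    return -0.5 * (x / 5.0) ** 2

def log_pi(x, beta):          # geometric bridge between reference and target
    return (1.0 - beta) * log_ref(x) + beta * log_target(x)

N, betas, step = 2_000, np.linspace(0.0, 1.0, 21), 1.0
x = rng.normal(0.0, 5.0, size=N)          # particles drawn from the reference
logw = np.zeros(N)

for b_prev, b in zip(betas[:-1], betas[1:]):
    # Importance sampling step: incremental weights for the new temperature.
    logw += (b - b_prev) * (log_target(x) - log_ref(x))
    # Resampling step (multinomial).
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = x[rng.choice(N, size=N, p=w)]
    logw[:] = 0.0
    # MCMC move step: one random-walk Metropolis update targeting pi_b.
    prop = x + step * rng.standard_normal(N)
    accept = np.log(rng.random(N)) < log_pi(prop, b) - log_pi(x, b)
    x = np.where(accept, prop, x)

print("fraction of particles in the right-hand mode:", np.mean(x > 0))
```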

Monte Carlo Methods on Complex Networks

This chapter focuses on the behaviour of the Ising model on complex networks which have been generated during a study of interference in wireless mobile phone networks in Dublin, Ireland [3]. The Ising model is a simple mathematical model with many degrees of freedom which can exhibit an order-disorder phase transition. The nature of this phase transition depends both on the type of lattice and the number of spatial dimensions of this lattice. It should be of no surprise then that embedding the Ising and Potts models on complex networks gives rise to some unique and unexpected behaviours. Using Monte Carlo methods, we study this critical behaviour and the effect of the network substrate on the Ising model. An excellent review of the Potts and Ising models on regular lattices can be found in Ref. [4] and we will often refer to this review when comparing lattice and complex network differences and similarities.
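
For orientation, a minimal Metropolis sketch of the Ising model on an arbitrary undirected network; the Erdős-Rényi random graph, coupling and temperature below are illustrative stand-ins for the mobile-phone networks studied in the chapter.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative network: an Erdos-Renyi random graph stored as adjacency lists.
n, p_edge = 200, 0.05
adj = [[] for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p_edge:
            adj[i].append(j)
            adj[j].append(i)

spins = rng.choice([-1, 1], size=n)
J, T = 1.0, 2.0                      # coupling and temperature (k_B = 1)

def metropolis_sweep(spins, beta):
    """One Metropolis sweep: attempt a spin flip at every node of the network."""
    for i in rng.permutation(n):
        # Energy change of flipping spin i: dE = 2*J*s_i*(sum of neighbouring spins).
        dE = 2.0 * J * spins[i] * sum(spins[j] for j in adj[i])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] = -spins[i]

for sweep in range(500):
    metropolis_sweep(spins, beta=1.0 / T)

print("magnetisation per spin:", spins.mean())
```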

Pricing American Options using Monte Carlo Methods

The pricing of options is a very important problem encountered in financial markets today. Many problems in mathematical finance entail the computation of a particular integral. In many cases these integrals can be valued analytically, and in still more cases they can be valued using numerical integration, or computed using a partial differential equation (PDE). The famous Black-Scholes model, for instance, provides explicit closed-form solutions for the values of certain (European style) call and put options. However, when the number of dimensions in the problem is large, PDEs and numerical integrals become intractable: the formulas expressing them are complicated and difficult to evaluate accurately by conventional methods. In these cases Monte Carlo methods often give better results, because they have proved to be valuable and flexible computational tools for calculating the value of options with multiple sources of uncertainty or with complicated features.
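
A minimal sketch of the simplest case alluded to above: plain Monte Carlo pricing of a European call, with the Black-Scholes closed form as a check. The parameters are illustrative, and the extra machinery needed for American-style early exercise (e.g. regression-based continuation values) is not shown.

```python
import numpy as np
from math import log, sqrt, exp, erf

rng = np.random.default_rng(5)

# Illustrative parameters for a European call under Black-Scholes dynamics.
S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price, used here only as a reference value."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n_paths=1_000_000):
    """Plain Monte Carlo: simulate terminal prices and discount the payoff."""
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    disc = np.exp(-r * T) * np.maximum(ST - K, 0.0)
    return disc.mean(), disc.std(ddof=1) / np.sqrt(n_paths)

price, stderr = mc_call(S0, K, r, sigma, T)
print(f"Monte Carlo: {price:.4f} +/- {stderr:.4f}   "
      f"Black-Scholes: {bs_call(S0, K, r, sigma, T):.4f}")
```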

Variance Reduction Techniques of Importance Sampling Monte Carlo Methods for Pricing Options

In this paper we discuss importance sampling Monte Carlo methods for pricing options. The classical importance sampling method is used to eliminate the variance caused by the linear part of the logarithmic function of the payoff, while the variance caused by the quadratic part is reduced by stratified sampling. We eliminate both kinds of variance by importance sampling alone. The corresponding space for the eigenvalues of the Hessian matrix of the logarithmic function of the payoff is enlarged. Computational simulation shows the high efficiency of the new method.
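
As a rough illustration of importance sampling in this context, here is a drift-shift sketch for a deep out-of-the-money call (not the authors' eigenvalue-based construction; the drift choice below is a heuristic assumption): the sampling distribution is shifted toward the strike and the estimator is corrected with the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(6)

# Deep out-of-the-money European call (illustrative parameters).
S0, K, r, sigma, T = 100.0, 160.0, 0.05, 0.2, 1.0
n = 200_000

def discounted_payoff(Z):
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0)

# Plain Monte Carlo: most samples give zero payoff, so the variance is high.
plain = discounted_payoff(rng.standard_normal(n))

# Importance sampling: shift the drift of Z so that paths end near the strike,
# then correct with the likelihood ratio of N(0,1) against N(mu,1).
mu = (np.log(K / S0) - (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
Zs = rng.standard_normal(n) + mu
tilted = discounted_payoff(Zs) * np.exp(-mu * Zs + 0.5 * mu**2)

for name, est in [("plain", plain), ("importance sampling", tilted)]:
    print(f"{name:>20}: {est.mean():.5f}  (std err {est.std(ddof=1)/np.sqrt(n):.5f})")
```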

Monte Carlo methods

Abstract. Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Monte Carlo Markov chain (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
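
To illustrate the first of these algorithms, a minimal rejection sampler for an unnormalised target density with a Gaussian proposal; the target, the proposal scale and the numerically computed envelope constant M are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Unnormalised target density (illustrative): a bimodal mixture on the real line.
def target(x):
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

# Proposal: N(0, 3^2), with density q and a constant M such that target <= M*q.
prop_sd = 3.0

def proposal_pdf(x):
    return np.exp(-0.5 * (x / prop_sd) ** 2) / (prop_sd * np.sqrt(2 * np.pi))

grid = np.linspace(-10, 10, 4001)
M = 1.1 * np.max(target(grid) / proposal_pdf(grid))   # numerical envelope bound

def rejection_sample(n):
    """Accept a proposal x with probability target(x) / (M * q(x))."""
    samples = []
    while len(samples) < n:
        x = rng.normal(0.0, prop_sd)
        if rng.random() < target(x) / (M * proposal_pdf(x)):
            samples.append(x)
    return np.array(samples)

xs = rejection_sample(10_000)
print("sample mean under the target:", xs.mean())
```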

Monte Carlo methods for adaptive sparse approximations of time series

The stochastic gradient formulation naturally allows us to use just a small block x taken at a random location from the time-series in each iteration. The full data set therefore does not have to be kept in memory, and the method is well suited for applications in which new data becomes available sequentially. If dealing with blocks instead of the complete time-series, end-effects have to be taken into account. For example, when inferring s for a given observation block x and a model matrix A, less information is available in the observation for those coefficients s for which the associated column in A only contains a small part of a feature a_k [13]. The advantage of the Monte Carlo methods studied in this paper is that this uncertainty is reflected in the full posterior distribution p(s | x, A), so that the heuristics suggested with previous approaches [26] and [13] are not required.

Monte Carlo methods for linear and non-linear Poisson-Boltzmann equation*

Abstract. The electrostatic potential in the neighborhood of a biomolecule can be computed thanks to the non-linear divergence-form elliptic Poisson-Boltzmann PDE. Dedicated Monte Carlo methods have been developed to solve its linearized version (see e.g. [7], [27]). These algorithms combine walk on spheres techniques and appropriate replacements at the boundary of the molecule. In the first part of this article we compare recent replacement methods for this linearized equation on real-size biomolecules, which also requires efficient computational geometry algorithms. We compare our results with the deterministic solver APBS. In the second part, we prove a new probabilistic interpretation of the nonlinear Poisson-Boltzmann PDE. A Monte Carlo algorithm is also derived and tested on a simple test case.
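
The walk-on-spheres primitive underlying these methods is easy to sketch in its plainest form, here for the Laplace equation in the unit disk with harmonic boundary data so the exact answer is known; the screened (linearized Poisson-Boltzmann) case requires an additional exponential weight and the boundary replacement steps discussed in the article, which are not shown.

```python
import numpy as np

rng = np.random.default_rng(8)

# Harmonic boundary data on the unit circle: g(x, y) = x^2 - y^2 is itself
# harmonic, so the solution of Laplace's equation inside the disk equals
# x^2 - y^2 and provides an exact reference value.
def g(p):
    return p[0] ** 2 - p[1] ** 2

def walk_on_spheres(p0, eps=1e-4, n_walks=20_000):
    """Estimate u(p0) for Laplace's equation in the unit disk."""
    total = 0.0
    for _ in range(n_walks):
        p = np.array(p0, dtype=float)
        while True:
            d = 1.0 - np.linalg.norm(p)        # distance to the boundary
            if d < eps:                        # close enough: project and score
                total += g(p / np.linalg.norm(p))
                break
            phi = 2.0 * np.pi * rng.random()   # uniform point on the sphere of radius d
            p = p + d * np.array([np.cos(phi), np.sin(phi)])
    return total / n_walks

p0 = (0.3, 0.4)
print("walk on spheres:", walk_on_spheres(p0), " exact:", p0[0]**2 - p0[1]**2)
```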

Limit theorems for weighted samples with applications to sequential Monte Carlo methods

Abstract. In the last decade, sequential Monte-Carlo methods (SMC) emerged as a key tool in computational statistics (see for instance [3], [9], [7]). These algorithms approximate a sequence of distributions by a sequence of weighted empirical measures associated to a weighted population of particles, which are generated recursively.

Monte Carlo methods in PageRank computation: When one iteration is sufficient

PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as the frequency with which a random surfer visits a Web page, and thus it reflects the popularity of a Web page. Google computes the PageRank using the power iteration method, which requires about one week of intensive computations. In the present work we propose and analyze Monte Carlo type methods for the PageRank computation. There are several advantages of the probabilistic Monte Carlo methods over the deterministic power iteration method: Monte Carlo methods provide a good estimate of the PageRank for relatively important pages already after one iteration; Monte Carlo methods have a natural parallel implementation; and finally, Monte Carlo methods allow continuous updating of the PageRank as the structure of the Web changes.
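
A toy sketch of the simplest end-point variant of this idea (the link graph, damping factor and walk count are illustrative): each walk follows random out-links and terminates with probability 1 - c, and the empirical distribution of termination points estimates the PageRank vector.

```python
import numpy as np

rng = np.random.default_rng(9)

# Tiny illustrative link graph: out_links[i] lists the pages that page i links to.
out_links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n_pages, c = 4, 0.85          # c is the damping factor

def mc_pagerank(n_walks=200_000):
    """Estimate PageRank as the distribution of the end points of short walks.

    Each walk starts at a uniformly chosen page, follows a random out-link
    with probability c, and terminates with probability 1 - c; the empirical
    distribution of termination points estimates the PageRank vector.
    """
    counts = np.zeros(n_pages)
    for _ in range(n_walks):
        page = rng.integers(n_pages)
        while rng.random() < c:
            links = out_links.get(page, [])
            if not links:                      # dangling page: jump uniformly
                page = rng.integers(n_pages)
            else:
                page = links[rng.integers(len(links))]
        counts[page] += 1
    return counts / n_walks

print(np.round(mc_pagerank(), 3))
```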

Information geometric Markov chain Monte Carlo methods using diffusions

Martin et al. [53] consider Bayesian inference for a statistical inverse problem, in which a surface explosion causes seismic waves to travel down into the ground (the subsurface medium). Often, the properties of the subsurface vary with distance from ground level or because of obstacles in the medium, in which case a fraction of the waves will scatter off these boundaries and be reflected back up to ground level at later times. The observations here are the initial explosion and the waves that return to the surface, together with their return times. The challenge is to infer the properties of the subsurface medium from these data. The authors construct a likelihood based on the wave equation for the data and perform Bayesian inference using a variant of the manifold MALA. Figures are provided showing the local correlations present in the posterior and therefore highlighting the need for an algorithm that can navigate the high-density region efficiently. Several methods are compared in the paper, but the variant of MALA that incorporates a local correlation structure is shown to be the most efficient, particularly as the dimension of the problem increases [53].
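
For reference, a minimal sketch of the plain (non-manifold) Metropolis-adjusted Langevin algorithm that such variants build on; the correlated Gaussian target and step size are illustrative, and the position-dependent metric used by manifold MALA is not included.

```python
import numpy as np

rng = np.random.default_rng(10)

# Illustrative target: a correlated 2-D Gaussian, with log density and gradient.
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
prec = np.linalg.inv(cov)

def log_pi(x):
    return -0.5 * x @ prec @ x

def grad_log_pi(x):
    return -prec @ x

def mala(n_iters=20_000, h=0.1):
    """Metropolis-adjusted Langevin: drift along the gradient, then accept/reject."""
    x = np.zeros(2)
    chain = np.empty((n_iters, 2))

    def log_q(y, x):           # log density (up to a constant) of proposing y from x
        mean = x + 0.5 * h * grad_log_pi(x)
        return -np.sum((y - mean) ** 2) / (2.0 * h)

    for t in range(n_iters):
        prop = x + 0.5 * h * grad_log_pi(x) + np.sqrt(h) * rng.standard_normal(2)
        log_alpha = log_pi(prop) + log_q(x, prop) - log_pi(x) - log_q(prop, x)
        if np.log(rng.random()) < log_alpha:
            x = prop
        chain[t] = x
    return chain

chain = mala()
print("sample covariance:\n", np.cov(chain[5_000:].T))   # should be close to cov
```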

Sparse Estimation in Ising Model via Penalized Monte Carlo Methods

For example, Honorio (2012) and Atchadé et al. (2017) analyzed stochastic versions of proximal gradient algorithms. Both papers derive nonasymptotic bounds between the output of the algorithm and the true minimizer of the cost function. However, in the current paper we focus on the model selection properties of MCMC methods. We investigate them in the high-dimensional scenario and compare them to the existing methods mentioned above. Model selection for undirected graphical models means finding the existing edges in the “sparse” graph, that is, a graph having relatively few edges (compared to the total number of possible edges d(d−1)/2 and the sample size n).

Towards Heavy Element Materials with Electronic Structure Quantum Monte Carlo Methods.

Another key goal is to increase the accuracy of the new ECPs beyond previous constructions, using measures that we define later. We explore a few strategies for constructing ECPs using correlated wave function methods from the outset. Ideally, these effective Hamiltonians should reproduce, as closely as possible, the behavior of the valence electrons with the original Hamiltonian in a vast range of chemical environments, regardless of whether one calculates a molecule or a condensed system, in an equilibrium or non-equilibrium conformation, and within weak or strong bonding settings. Therefore, our goal is to construct an effective Hamiltonian that mimics the many-body valence spectrum, the spatial structure of eigenstates, and the overall scattering properties of the original, relativistic, all-electron atom’s Hamiltonian. Clearly, some compromises will have to be made, and in this work we investigate the accuracy limits of ECPs of a simple semi-local form with almost a minimal number of parameters, and we derive such ECPs for a small set of testing elements. As a guiding principle, we have in mind isospectrality for a subset of states, i.e., we demand that the all-electron and ECP spectra are as close as possible for a set of valence states. Isospectrality is a very general property and applies even in cases when the Hilbert spaces for the two isospectral operators are different, e.g., due to different boundary conditions and different spatial domains.

Genetic Algorithm Sequential Monte Carlo Methods For Stochastic Volatility And Parameter Estimation

The Black-Scholes formulas are correct provided that the variance rate is set equal to the average variance rate during the life of the option. Equation (1) assumes that the instantaneous volatility of an asset is perfectly predictable. In practice, volatility varies stochastically. This has led to the development of more complex models with two stochastic variables: the stock price and its volatility. Stochastic volatility estimation requires filtered estimates to account for estimation errors. Many authors have considered the problem of simulation-based filtering with known parameters. A common approach is to use particle filtering methods; see for example Gordon, Salmond and Smith [7], Liu and Chen [10], and Pitt and Shephard [12].
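
A minimal bootstrap particle filter for the canonical log-volatility model x_t = mu + phi*(x_{t-1} - mu) + sigma_v*eps_t, y_t = exp(x_t/2)*eta_t, sketching the filtering step such methods build on; the parameter values are illustrative, and the genetic-algorithm and parameter-estimation components of the paper are not shown.

```python
import numpy as np

rng = np.random.default_rng(11)

# Canonical stochastic volatility model (parameters are illustrative):
#   x_t = mu + phi*(x_{t-1} - mu) + sigma_v * eps_t,   y_t = exp(x_t / 2) * eta_t
mu, phi, sigma_v, T = -1.0, 0.95, 0.25, 500

# Simulate synthetic data from the model.
x = np.empty(T)
x[0] = mu + sigma_v / np.sqrt(1 - phi**2) * rng.standard_normal()
for t in range(1, T):
    x[t] = mu + phi * (x[t - 1] - mu) + sigma_v * rng.standard_normal()
y = np.exp(x / 2) * rng.standard_normal(T)

def bootstrap_filter(y, n_particles=2_000):
    """Bootstrap particle filter: propagate with the prior, weight by the likelihood."""
    filt_mean = np.empty(len(y))
    parts = mu + sigma_v / np.sqrt(1 - phi**2) * rng.standard_normal(n_particles)
    for t, yt in enumerate(y):
        if t > 0:   # propagate particles through the state transition
            parts = mu + phi * (parts - mu) + sigma_v * rng.standard_normal(n_particles)
        # Weight by the observation density y_t | x_t ~ N(0, exp(x_t)).
        logw = -0.5 * (parts + yt**2 * np.exp(-parts))
        w = np.exp(logw - logw.max()); w /= w.sum()
        filt_mean[t] = np.sum(w * parts)
        parts = parts[rng.choice(n_particles, size=n_particles, p=w)]   # resample
    return filt_mean

est = bootstrap_filter(y)
print("RMSE of filtered log-volatility:", np.sqrt(np.mean((est - x) ** 2)))
```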
