Table 2: Root mean-squared error of moment estimates for two mixture scenarios. The first row gives results for pseudo-extended MCMC with β estimated; the remaining rows are for fixed β ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. Results are calculated over 20 independent simulations and reported to two decimal places, with bold font indicating the lowest RMSE in each column.
Markov chain Monte Carlo algorithms are widely used for approximately sampling from complicated probability distributions. However, it is often necessary to tune the scaling and other parameters before the algorithm will converge efficiently. Adaptive MCMC algorithms modify their transitions on the fly, in an effort to tune the parameters automatically and improve convergence. Some adaptive MCMC methods use regeneration times and other somewhat complicated constructions, see  and . However, Haario et al.  proposed an adaptive Metropolis algorithm attempting to optimise the proposal distribution, and proved that a particular version of this algorithm converges strongly to the target distribution. The algorithm can be viewed as a version of the Robbins-Monro stochastic control algorithm, see  and . These results were later generalized to prove convergence of more general adaptive MCMC algorithms, see  and .
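To make the adaptive Metropolis idea concrete, the sketch below takes the proposal covariance to be the empirical covariance of the chain's history, scaled by the classic 2.38²/d factor, with a small regularisation term ε·I keeping it positive definite. This is a minimal illustrative sketch, not the exact algorithm of Haario et al.: the covariance is recomputed from scratch each iteration for clarity, whereas a practical implementation would update it recursively, and the function name and defaults are our own.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter=4000, eps=1e-6,
                        adapt_start=100, seed=0):
    """Adaptive Metropolis sketch: the proposal covariance tracks the
    empirical covariance of the chain so far, scaled by 2.38^2 / d."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    d = len(x)
    lp = log_target(x)
    chain = np.empty((n_iter, d))
    cov = np.eye(d)                      # initial proposal covariance
    for t in range(n_iter):
        scale = 2.38**2 / d
        prop = rng.multivariate_normal(x, scale * cov + eps * np.eye(d))
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept step
            x, lp = prop, lp_prop
        chain[t] = x
        if t >= adapt_start:             # adapt from the full history
            cov = np.cov(chain[: t + 1].T)
    return chain
```

On a target whose scales are initially unknown, the adapted covariance gradually matches the target's, removing the need to hand-tune the proposal.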
The Markov chain Monte Carlo method is used to sample from the empirical probability density of a stock price. The technique is flexible and requires only the ability to evaluate the probability at any given point. Furthermore, MCMC has been successfully applied to one-factor models of the interest rate (B. Eraker, 2001). This also motivates choosing it for this approach to modelling stock prices.
The coefficient of variation (CV) of a population is defined as the ratio of the population standard deviation to the population mean. It is regarded as a measure of stability or uncertainty, and can indicate the relative dispersion of the data about the population mean. In this article, based on upper record values, we study the behavior of the CV of a random variable that follows a Lomax distribution. Specifically, we compute the maximum likelihood estimates (MLEs) and confidence intervals for the CV, based on the observed Fisher information matrix and the asymptotic distribution of the maximum likelihood estimator, and also by using the bootstrapping technique. In addition, we propose to apply Markov chain Monte Carlo (MCMC) techniques to tackle this problem, which allows us to construct credible intervals. A numerical example based on real data is presented to illustrate the implementation of the proposed procedure. Finally, Monte Carlo simulations are performed to observe the behavior of the proposed methods.
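As a deliberately simplified illustration of the bootstrap part of this programme, the snippet below computes a percentile bootstrap confidence interval for the sample CV. It operates on a plain i.i.d. sample rather than upper record values and is not specific to the Lomax distribution; the function name and tuning constants are our own.

```python
import numpy as np

def bootstrap_ci_cv(data, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for CV = sd / mean.
    Illustrative sketch only; not record-value or Lomax specific."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    n = len(data)
    cvs = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(data, size=n, replace=True)
        cvs[b] = resample.std(ddof=1) / resample.mean()
    lo, hi = np.quantile(cvs, [alpha / 2, 1 - alpha / 2])
    point = data.std(ddof=1) / data.mean()
    return point, (lo, hi)
```

For an exponential sample the true CV is 1, which gives a quick sanity check on both the point estimate and the interval.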
(MC)³ differs from our work in that communication between chains is infrequent, so the chains can be executed across networked computers. The aims are also very different: (MC)³ increases the mixing of the chain, improving the chances of discovering alternative solutions and helping the simulation avoid becoming stuck in local optima. Essentially, it reduces the number of iterations required for the simulation to converge, whereas our method reduces the time required to perform a given number of iterations. The two approaches complement each other, particularly since (MC)³ permits its chains to be spread over
Abstract. Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo methods in experimental physics, and point to landmarks in the literature for the curious reader.
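Of the algorithms listed, rejection sampling is the easiest to state in code: draw x from a proposal q and accept it with probability p(x)/(M·q(x)), where M bounds p/q everywhere. A minimal sketch (the naming is our own, and the envelope constant M must be supplied by the user):

```python
import numpy as np

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, n, seed=0):
    """Rejection sampling: accept x ~ q with probability p(x) / (M q(x)),
    which requires p(x) <= M q(x) for all x."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        x = proposal_sample(rng)
        if rng.uniform() < target_pdf(x) / (M * proposal_pdf(x)):
            out.append(x)
    return np.array(out)
```

For example, with p the Beta(2,2) density 6x(1−x) and a Uniform(0,1) proposal, M = 1.5 suffices because the density's maximum is 1.5; on average M draws from q are needed per accepted sample.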
P(θ|D) ∝ P(D|θ)P(θ). (1) In principle, Equation 1 defines the posterior probability of any value of θ, but computing this may not be tractable analytically or numerically. For this reason a variety of methods have been developed to support approximate Bayesian inference. One of the most popular methods is Markov chain Monte Carlo (MCMC), in which a Markov chain is used to sample from the posterior distribution.
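The key practical point is that MCMC needs only the unnormalized right-hand side of Equation 1. A minimal one-dimensional random-walk Metropolis sampler, written against the log posterior for numerical stability (an illustrative sketch under our own naming, not tied to any particular package):

```python
import numpy as np

def metropolis(log_post, theta0, step=0.5, n_iter=8000, seed=0):
    """Random-walk Metropolis: only the unnormalized log posterior
    log P(D|theta) + log P(theta) is required, as in Equation 1."""
    rng = np.random.default_rng(seed)
    theta = float(theta0)
    lp = log_post(theta)
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject
            theta, lp = prop, lp_prop
        chain[t] = theta
    return chain
```

With a conjugate normal prior and normal likelihood, the chain's mean can be checked against the closed-form posterior mean, which makes the sketch easy to validate.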
Abstract - Hand-gesture recognition is a method of non-verbal communication for physically impaired people, offering freer expression than other body parts. It is the best method to interact with the computer without using other peripheral devices such as a keyboard or mouse. The main objective of this project is to present a vision-based user interface designed to achieve computer accessibility for people who lack fine hand motor skills. A detection and tracking framework was constructed with unique descriptive features for face and hand representation. A method based on Hidden Markov Models (HMMs) is presented for gesture trajectory modeling and recognition. The HMM is a powerful statistical tool for modeling a wide range of time-series data. Markov chain Monte Carlo (MCMC) plays a positive role in Bayesian statistical computation. Image classification is a classical problem in image processing. An artificial neural network classifier is used to classify the gestures. Once a gesture is identified, the appropriate command is executed.
In the last 20 years the applicability of Bayesian inference has been substantially improved through the use of Markov chain Monte Carlo (MCMC) methods. MCMC involves the creation of an ergodic Markov chain whose stationary distribution is equal to P(θ|D, M), such that, once the chain has converged, it can be used to generate samples from posterior parameter distributions with complex geometries. ‘Classical’ MCMC methods such as the Metropolis algorithm  and Hybrid Monte Carlo  can be used to address this first level of inference while, in the present day, advanced algorithms such as Reversible Jump MCMC , Transitional MCMC , Asymptotically Independent Markov Sampling  and Nested Sampling  are also capable of addressing Bayesian model selection.
For sampling methods based on Markov chains that explore the space locally, like the RWM and MALA, it may be advantageous to impose a different metric structure on the space X, so that some points are drawn closer together and others pushed further apart. Intuitively, one can picture distances in the space being redefined such that, if the current position of the chain is far from a region of X that is “likely to occur” under π(·), then the distance to that typical set is reduced. Similarly, once this region is reached, the space could be “stretched” or “warped” so that it is explored as efficiently as possible.
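One concrete way to impose such a metric is to give a random-walk Metropolis sampler a proposal covariance matched to the target's scales, so that steps are long in directions where π(·) is spread out and short where it is concentrated. A hedged sketch (our own construction, using the usual 2.38²/d Gaussian-target scaling):

```python
import numpy as np

def rwm(log_target, x0, prop_cov, n_iter=6000, seed=0):
    """Random-walk Metropolis whose proposal covariance acts as a metric:
    directions with larger target variance receive larger proposal steps."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    d = len(x)
    L = np.linalg.cholesky(prop_cov)       # proposal = x + L z, z ~ N(0, I)
    lp = log_target(x)
    chain = np.empty((n_iter, d))
    accepted = 0
    for t in range(n_iter):
        prop = x + L @ rng.standard_normal(d)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp, accepted = prop, lp_prop, accepted + 1
        chain[t] = x
    return chain, accepted / n_iter
```

On an anisotropic Gaussian target with variances (100, 1), choosing prop_cov = (2.38²/2)·diag(100, 1) equalizes exploration across the two directions, whereas an identity proposal covariance forces tiny relative steps along the wide direction.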
hansen, 2011; Del Moral et al., 2006) and Markov chain Monte Carlo (MCMC; see, e.g., Robert and Casella, 2004; Liu, 2001) methods in particular have found application to a wide range of data analysis problems involving complex, high-dimensional models. These include state-space models (SSMs), which are used in the context of time series and dynamical systems modeling in a wide range of scientific fields. The strong assumptions of linearity and Gaussianity that were originally invoked for SSMs have indeed been weakened by decades of research on SMC and MCMC. These methods have not, however, led to a substantial weakening of a further strong assumption, that of Markovianity. It remains a major challenge to develop efficient inference algorithms for models containing a latent stochastic process which, in contrast with the state process in an SSM, is non-Markovian. Such non-Markovian latent variable models arise in various settings, either from direct modeling or via a transformation or marginalization of an SSM. We discuss this further in Section 6; see also Lindsten and Schön (2013, Section 4).
Madras and Randall  and Jerrum, Son, Tetali and Vigoda  have shown how to derive estimates for spectral gaps and logarithmic Sobolev constants of the generator of a Markov chain from corresponding local estimates on the sets of a decomposition of the state space, combined with estimates for the projected chain. This has been applied to tempering algorithms in ,  and . We now develop related decomposition techniques for sequential MCMC. However, in this case, we will assume only local estimates for the generators L_t, and no
In this paper, we propose an original approach to the solution of Fredholm equations of the second kind. We interpret the standard von Neumann expansion of the solution as an expectation with respect to a probability distribution defined on a union of subspaces of variable dimension. Based on this representation, it is possible to use trans-dimensional Markov chain Monte Carlo (MCMC) methods such as Reversible Jump MCMC to approximate the solution numerically. This can be an attractive alternative to the standard Sequential Importance Sampling (SIS) methods routinely used in this context. To motivate our approach, we sketch an application to value-function estimation for a Markov decision process. Two computational examples are also provided.
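The von Neumann expansion idea can be illustrated in its simplest discrete form: solve x = b + Ax (with a contraction A) by estimating the series Σₖ Aᵏb, truncating each replicate at a geometrically distributed random index N and reweighting term k by 1/P(N ≥ k) so that each replicate remains unbiased. This is a toy discrete analogue under our own naming, not the trans-dimensional MCMC construction of the paper:

```python
import numpy as np

def neumann_mc_solve(A, b, q=0.3, n_rep=4000, seed=0):
    """Monte Carlo solution of x = b + A x via the von Neumann series
    x = sum_k A^k b: each replicate truncates at N ~ Geometric(q) and
    reweights term k by 1 / P(N >= k) = (1 - q)^(-k), so it is unbiased.
    Requires ||A|| < 1 - q for the reweighted terms to stay controlled."""
    rng = np.random.default_rng(seed)
    d = len(b)
    total = np.zeros(d)
    for _ in range(n_rep):
        N = rng.geometric(q) - 1          # failures before first success
        est = np.zeros(d)
        term = np.asarray(b, float)       # A^k b, starting at k = 0
        w = 1.0                           # 1 / P(N >= k)
        for k in range(N + 1):
            est += w * term
            term = A @ term
            w /= (1.0 - q)
        total += est
    return total / n_rep
```

Averaging the replicates recovers the solution of (I − A)x = b, which gives a direct check against a deterministic linear solve.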
In this paper, the Bayes estimators of the unknown parameters of the Lomax distribution are considered under the assumption of gamma priors on both the shape and scale parameters. The Bayes estimators cannot be obtained in explicit form, so we propose Markov chain Monte Carlo (MCMC) techniques to generate samples from the posterior distributions and in turn compute the Bayes estimators. Point estimation and confidence intervals based on maximum likelihood and bootstrap methods are also proposed. The approximate Bayes estimators obtained under the assumption of non-informative priors are compared with the maximum likelihood estimators using Monte Carlo simulations. One real data set has been analyzed for illustrative purposes.
In the context of nonparametric Bayesian estimation, a Markov chain Monte Carlo algorithm is devised and implemented to sample from the posterior distribution of the drift function of a continuously or discretely observed one-dimensional diffusion. The drift is modeled by a scaled linear combination of basis functions with a Gaussian prior on the coefficients. The scaling parameter is equipped with a partially conjugate prior, and the number of basis functions in the drift is equipped with a prior distribution as well. For continuous data, a reversible jump Markov chain algorithm enables the exploration of the posterior over models of varying dimension. Subsequently, it is explained how data augmentation can be used to extend the algorithm to diffusions observed discretely in time. Some examples illustrate that the method can give satisfactory results; in these examples, a comparison is also made with an existing alternative method.
progressive first-failure censored sampling, and Soliman et al. (2011b) proposed a simulation-based approach to the study of the coefficient of variation of the Gompertz distribution under progressive first-failure censoring. The purpose of this paper is therefore to develop Bayes estimates and Markov chain Monte Carlo (MCMC) techniques to compute the credible intervals and bootstrap confidence intervals of the unknown parameters of the Lomax distribution under the progressive first-failure censoring plan.
The estimator of T does not appear to have similarly desirable properties, at least not in the case of T = ∞. There are two reasons for this. First, the Monte Carlo variance for the parameter T seems to be quite large in many cases (compare Figure 3c to Figure 3d). Since the likelihood surface often is very flat for this parameter, the estimates may not be reliable. The second reason is that the integrated likelihood surface often has multiple peaks, e.g., Figure 3c. This is in fact a real property of the likelihood function, rather than an artifact of the Monte Carlo variance. The multimodality can easily be explained by considering the structure of the underlying gene genealogies and is a consequence of having only a limited number of migration events occurring in the ancestry of the sample. Consider the hypothetical case where the times of migration events in the genealogy are known and fixed. Assuming low migration, the likelihood will then always be higher when T is slightly smaller than the age of the migration event than if T is slightly larger. The likelihood surface for T may therefore increase as T approaches the time of a migration event and decrease at the time right after a migration event. Obviously, the times of migration events are not known and fixed in real data, but may be quite well determined if there are sufficient nucleotide data, causing the integrated likelihood surface for T to have multiple modes.
Cascades of information, ideas, rumors, and viruses spread through networks. Sometimes it is desirable to find the source of a cascade given a snapshot of it. In this paper, the source inference problem is tackled under the Independent Cascade (IC) model. First, the #P-completeness of the source inference problem is proven. Then, a Markov chain Monte Carlo algorithm is proposed to find a solution. It is worth noting that our algorithm is designed to handle large networks. In addition, the algorithm does not rely on prior knowledge of when the cascade started. Finally, experiments on a real social network are conducted to evaluate the performance. Under all experimental settings, our algorithm identified the true source with high probability.
The authors start with a reversible unbiased random walk on a one-dimensional finite lattice and then make two copies of the state space, one ‘upstairs’ for transitions to the ‘right’ and one ‘downstairs’ for transitions to the ‘left’, plus transitions between the two levels. This non-reversible chain converges more quickly according to two different distance metrics. Geyer and Mira (2000) reanalyze the same system, this time with respect to asymptotic variance, and find that the most efficient version of the non-reversible chain sweeps through the states in a deterministic way. In a related fashion, Neal (2004) constructs non-reversible chains from reversible chains and demonstrates that their asymptotic variance is no worse than that of the original reversible chains. Other non-reversible schemes are inspired by non-diffusive physical systems, such as a method for inserting ‘vortices’ by Sun et al. (2010).