The idea of updating a tree by adding leaves dates back at least to Felsenstein (1981), who notes that, for maximum likelihood estimation, an effective search strategy in tree space is to add species one by one. More recent work also makes use of the idea of adding sequences one at a time: ARGWeaver (Rasmussen et al. 2014) uses this approach to initialise MCMC on t + 1 sequences (in this case, over a space of graphs) using the output of MCMC on t sequences, and TreeMix (Pickrell and Pritchard 2012) uses a similar idea in a greedy algorithm. In work conducted simultaneously with our own, Dinh et al. (2018) also propose a sequential Monte Carlo approach to inferring phylogenies in which the sequence of distributions is given by introducing sequences one by one. However, their approach uses different proposal distributions for new sequences; does not infer the mutation rate simultaneously with the tree; does not exploit intermediate distributions to reduce the variance; and does not use adaptive MCMC moves. Further investigation of their approach can be found in Fourment et al. (2018), where different guided proposal distributions are explored but the aforementioned limitations remain.
The sequential Monte Carlo method in this paper is successfully applied to two seemingly different areas of speech processing: speech enhancement and speech recognition. This is possible because the graphical model shown in Figure 2 is applicable to both areas. The graphical model incorporates two hidden state sequences: one is the speech state sequence, which models transitions between speech units, and the other is a continuous-valued state sequence, which models noise statistics. With the sequential Monte Carlo method, noise parameter estimation can be conducted by sampling the speech state sequences and updating the continuous-valued noise states with 2N EKFs at each time step. The highly parallel structure of the method allows an efficient parallel implementation.
Sequential Monte Carlo (SMC) refers to a class of methods designed to approximate a sequence of probability distributions, defined over a sequence of probability spaces, by a set of points, termed particles, each of which carries a non-negative weight and is updated recursively in time. SMC methods can be seen as combining the sequential importance sampling method with the sampling importance resampling algorithm; they alternate mutation and selection steps. In the mutation step, the particles are propagated forward in time using proposal kernels and their importance weights are updated to account for the targeted distribution. In the selection (or resampling) step, particles multiply or die depending on their fitness, as measured by their importance weights. Many algorithms have since been proposed, which differ in the way the particles and the importance weights evolve and adapt.
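The mutation and selection steps described above can be sketched with a minimal bootstrap particle filter. The AR(1) state-space model, noise scales, and particle count below are illustrative assumptions, not taken from any of the papers discussed here.

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=1000, seed=0):
    """Minimal SMC sketch: mutation (propagate + reweight) then selection.

    Toy state-space model (an illustrative assumption):
        x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2).
    Returns the filtered posterior mean of x_t at each time step.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)              # initial particle cloud
    means = []
    for y in ys:
        # Mutation: propagate each particle through the proposal kernel
        # (here the prior dynamics, as in the bootstrap filter).
        x = 0.9 * x + rng.standard_normal(n_particles)
        # Reweight: importance weights account for the targeted distribution
        # via the likelihood of the new observation.
        logw = -0.5 * ((y - x) / 0.5) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * x)))
        # Selection: particles multiply or die according to their weights.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return means
```

With a stream of identical observations, the filtered mean settles between the prior pull toward zero and the observed value, which is one way to see the interplay of the mutation and selection steps.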
Yan Zhou, Adam M. Johansen, and John A. D. Aston

Model comparison for the purposes of selection, averaging, and validation is a problem found throughout statistics. Within the Bayesian paradigm, these problems all require the calculation of the posterior probabilities of models within a particular class. Substantial progress has been made in recent years, but difficulties remain in the implementation of existing schemes. This article presents adaptive sequential Monte Carlo (SMC) sampling strategies to characterize the posterior distribution of a collection of models, as well as the parameters of those models. Both a simple product estimator and a combination of SMC and a path sampling estimator are considered, and existing theoretical results are extended to include the path sampling variant. A novel approach to the automatic specification of distributions within SMC algorithms is presented and shown to outperform the state of the art in this area. The performance of the proposed strategies is demonstrated via an extensive empirical study. Comparisons with state-of-the-art algorithms show that the proposed algorithms are always competitive, and often substantially superior to alternative techniques, at equal computational cost and considerably less application-specific implementation effort. Supplementary materials for this article are available online.
The algorithm proposed in this paper is based on sequential Monte Carlo methods, which offer a flexible framework to approximate such distributions with weighted empirical measures associated with random samples. At each time step, the samples are moved randomly in R^d and associated with importance weights. In general situations, the computation of these importance weights involves the unknown transition density of the process (1). The solution introduced in Section 3 requires an unbiased estimator of these unknown transition densities. Moreover, this estimator must be almost surely positive and upper bounded. Statistical inference for stochastic differential equations is an active area of research, and several solutions have been proposed to design unbiased estimates of these transition densities. These estimators require different assumptions on the model (1); we provide below several solutions that can be investigated.
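As a sketch of what such an estimator can look like, the example below builds an unbiased, almost surely positive, and upper-bounded estimate of an intractable transition density by averaging a Gaussian density over auxiliary draws. The latent-jump model and all parameter values are assumptions for illustration, not the construction of any particular paper.

```python
import numpy as np

def unbiased_transition_estimate(x, y, rng, n_aux=10, s=0.5):
    """Unbiased, a.s. positive, bounded estimator of a transition density.

    Illustrative model: the transition x -> y goes through a latent variable
    z ~ N(x, 1), so that p(y | x) = E_z[ N(y; z, s^2) ].  Averaging the
    Gaussian density over draws of z is unbiased by construction, strictly
    positive, and bounded above by 1 / (s * sqrt(2 * pi)).
    """
    z = x + rng.standard_normal(n_aux)                      # z ~ N(x, 1)
    dens = np.exp(-0.5 * ((y - z) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return dens.mean()
```

In this toy case the true transition density is available in closed form, p(y | x) = N(y; x, 1 + s^2), which makes the unbiasedness easy to check numerically.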
ciated stochastic master equations (SMEs), provide a means to monitor the dynamical evolution of a quantum system and to provide an estimate of the underlying quantum state. In addition, the quantum trajectories resulting from the integration of stochastic master equations contain useful information about the parameters that govern the evolution of the system. Hybrid stochastic master equations provide a means to extract the information regarding these classical parameters. Hybrid SMEs involve running many parallel SMEs, each one having a different value for the parameter (or parameters). The classical probabilities attached to the individual SMEs and the associated parameter values can then be found by integrating a Kushner-Stratonovich equation. This classical estimation process is numerically costly, and is even more so when estimates are required for multiple parameters. This paper has demonstrated how such estimates can be found using a technique taken from classical state estimation and nonlinear filtering, a sequential Monte Carlo (SMC) sampler. The SMC sampler used in this paper has been demonstrated to allow the simultaneous estimation of three Hamiltonian parameters, together with their statistical correlation and the associated quantum trajectories, in a computationally tractable form, with a relatively small number of candidate parameter values and parallel SMEs.
The advancement of digital technology has increased the deployment of wireless sensor networks (WSNs) in our daily life. However, locating sensor nodes is a challenging task in WSNs. Sensing data without an accurate location is worthless, especially in critical applications. The pioneering technique in range-free localization schemes is a sequential Monte Carlo (SMC) method, which utilizes network connectivity to estimate sensor location without additional hardware. This study presents a comprehensive survey of state-of-the-art SMC localization schemes. We present the schemes as a thematic taxonomy of localization operation in SMC. Moreover, the critical characteristics of each existing scheme are analyzed to identify its advantages and disadvantages. The similarities and differences of each scheme are investigated on the basis of significant parameters, namely, localization accuracy, computational cost, communication cost, and number of samples. We discuss the challenges and directions of future research work for each parameter.
An intuitive solution is to build a scene-specialized detector that provides higher performance than a generic detector by using labeled samples from the target scene. However, labeling data manually for each scene and repeating the training process several times, according to the number of object classes in the target scene, are arduous and time-consuming tasks. A practical way to avoid these tasks is to automatically label samples from the target scene and to transfer only a set of useful samples from the labeled source dataset to the target specialized one. Our work moves in this direction. We propose an original formalization of transductive transfer learning (TTL) based on a sequential Monte Carlo (SMC) filter to specialize a generic classifier to a target scene. In the proposed formalization, we estimate a hidden target distribution from a source distribution in which we have a set of annotated samples, in order to give an estimated target distribution as an output. We consider samples of the training dataset as realizations of the joint probability distribution between samples' features and object classes.
In the present paper, motivated by the results reported in , we develop two sequential Monte Carlo algorithms: a particle filter and a mixture Kalman filter (MKF) for solving the problem of tracking and classifying a maneuvering target using kinematic measurements only. Two air target classes are considered: commercial aircraft (slowly maneuverable, mainly straight-line flight) and military aircraft (highly maneuverable turns are possible). The goal is to determine which type of aircraft is being observed. Given that both types of aircraft can perform slow maneuvers, recognition can only be achieved during maneuvers with high speed and acceleration. For this purpose, a bank of two multiple model (MM) class-dependent particle filters is designed and implemented. The novelty of the paper also lies in accounting for two kinds of constraints: on the acceleration and on the speed. We show that "hard constraints" can be naturally incorporated into the Monte Carlo framework. Two speed likelihood functions are defined based on prior information about the speed constraints of each class. Constraints of this kind are also incorporated in other approaches for decision making . At each filtering step, the estimated speed from each class-dependent filter is used to calculate a class-dependent speed likelihood, which together with the kinematic likelihood improves the classification process.
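The way a hard speed constraint can enter the classification as a likelihood factor can be sketched as follows. The uniform-below-a-maximum form of the speed likelihood and all numerical values are illustrative assumptions, not the paper's exact functions.

```python
import numpy as np

def speed_likelihood(speed, v_max):
    """Hard speed constraint expressed as a likelihood: uniform below the
    class's maximum speed, zero above it (illustrative form)."""
    return np.where(speed <= v_max, 1.0 / v_max, 0.0)

def class_posterior(est_speeds, kinematic_liks, v_maxs, prior):
    """Combine each class-dependent filter's kinematic likelihood with its
    speed likelihood to update the class probabilities."""
    lik = np.array([kinematic_liks[c] * speed_likelihood(est_speeds[c], v_maxs[c])
                    for c in range(len(prior))])
    post = np.asarray(prior) * lik
    return post / post.sum()
```

An estimated speed above a class's maximum zeroes that class's likelihood, so a fast maneuver immediately rules out the slow class, which is the mechanism by which maneuvers make recognition possible.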
probabilities, is developed using sequential Monte Carlo (SMC) techniques. Being soft-input and soft-output in nature, the proposed SMC detector is capable of exchanging the so-called extrinsic information with the other components in the above turbo receiver, successively improving the overall receiver performance. Finally, we have also treated channel-coded systems, and a novel blind turbo receiver is developed for joint demodulation, channel decoding, and MDSQ decoding. Simulation results have demonstrated the effectiveness of the proposed techniques.
We present a sequential Monte Carlo (SMC) method for maximum likelihood (ML) parameter estimation in latent variable models. Standard methods rely on gradient algorithms such as the Expectation-Maximization (EM) algorithm and its Monte Carlo variants. Our approach is different and motivated by considerations similar to simulated annealing (SA): we propose to sample from a sequence of artificial distributions whose support concentrates on the set of ML estimates. To achieve this we use SMC methods. We conclude by presenting simulation results on a toy problem and a nonlinear, non-Gaussian time series model.
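The annealing idea can be sketched as an SMC sampler targeting L(theta)^gamma_t along an increasing ladder gamma_t, so that the particles concentrate on the ML estimate as gamma grows. The flat prior, the random-walk Metropolis move, the ladder, and the toy likelihood below are all illustrative assumptions.

```python
import numpy as np

def smc_annealed_ml(loglik, theta0, gammas, rng, step=0.3):
    """SMC sampler over the ladder of targets proportional to L(theta)^gamma_t.

    Toy sketch assuming a flat prior: reweight by the incremental power of the
    likelihood, resample, then apply a Metropolis move invariant for the
    current target.  As gamma_t grows, particles concentrate on the MLE.
    """
    theta = theta0.copy()
    logl = loglik(theta)
    g_prev = 0.0
    for g in gammas:
        # Reweight by the incremental likelihood power L^(g - g_prev).
        logw = (g - g_prev) * logl
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(len(theta), size=len(theta), p=w)
        theta, logl = theta[idx], logl[idx]
        # Move: random-walk Metropolis step targeting L^g (flat prior).
        prop = theta + step * rng.standard_normal(len(theta))
        logl_prop = loglik(prop)
        accept = np.log(rng.random(len(theta))) < g * (logl_prop - logl)
        theta = np.where(accept, prop, theta)
        logl = np.where(accept, logl_prop, logl)
        g_prev = g
    return theta
```

With a Gaussian-shaped log-likelihood peaked at 2, the final particle cloud sits tightly around the ML estimate, mimicking the SA-style concentration described above.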
This paper presents Rao-Blackwellized sequential Monte Carlo methods to approximate smoothing distributions in conditionally linear and Gaussian state-space models within a common unifying framework. It also provides different techniques that can be used in the forward filtering pass to significantly improve the usual mixture Kalman filter. The filtering distributions are approximated at each time step by considering all possible offspring of all ancestral trajectories before discarding degenerate paths, instead of resampling the ancestral paths before propagating them to the next time step. The paper investigates the benefit of additional Rao-Blackwellization steps to sample new regimes at each time step conditional on the forward and backward particles. This rejuvenation step uses explicit integration of the hidden linear states before merging the forward and backward filters for two-filter based algorithms, or before sampling new states backward in time for FFBS-based methods. The paper presents Monte Carlo experiments with simulated data to illustrate that this additional rejuvenation step improves the performance of the smoothing algorithms with no substantial additional computational cost. The methods are also applied to commodity markets using WTI crude oil data.
We present in this paper a sequential Monte Carlo methodology for joint detection and tracking of a multiaspect target in image sequences. Unlike the traditional contact/association approach found in the literature, the proposed methodology enables integrated, multiframe target detection and tracking that incorporates statistical models for target aspect, target motion, and background clutter. Two implementations of the proposed algorithm are discussed using, respectively, a resample-move (RS) particle filter and an auxiliary particle filter (APF). Our simulation results suggest that the APF configuration slightly outperforms the RS filter in scenarios with stealthy targets.
We consider the problem of sequential inference of latent time series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We statistically characterize such time series by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t + 1 is used for the implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time series with innovations correlated in time, under different assumptions about knowledge of the parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach through comprehensive simulations of the challenging stochastic volatility model.
Feature matching approaches provide a general basis for fitting mechanistic models of disease. These methods bypass computing a likelihood and instead fit models of disease transmission to phylogenies by using simulation to match summary statistics. A number of feature matching approaches have been proposed for phylodynamic inference. Ratmann et al. (2012) proposed an approximate Bayesian computation (ABC) method, which was then applied to study the dynamics of influenza. Poon (2015) developed an ABC method based on a kernel function from computational linguistics, and showed its utility in a study on HIV. Giardina et al. (2017) used an approach that combines ABC and sequential Monte Carlo to infer contact network structure from phylogenies. This methodology is based on the more general ABC-SMC approach proposed by Toni et al. (2009). Feature matching methods have the advantage of being simulation-based, and therefore in principle allow for fitting mechanistic models of arbitrary complexity. However, they have the disadvantage of having no systematic criteria to determine which summary statistics are optimal. Likelihood-based methods, when feasible, allow for more efficient use of the information in the data.
Abstract. Sequential Monte Carlo (SMC) methods have demonstrated strong potential for inference on the state variables of Bayesian dynamic models. In this context, it is also often necessary to calibrate model parameters. To do so, we consider block maximum likelihood estimation based either on EM (Expectation-Maximization) or on gradient methods. In this approach, the key ingredient is the computation of smoothed sum functionals of the hidden states for a given value of the model parameters. It has been observed by several authors that using standard SMC methods for this smoothing task requires a substantial number of particles and may be unreliable for larger observation sample sizes. We introduce a simple variant of the basic sequential smoothing approach based on forgetting ideas. This modification, which is transparent in terms of computation time, reduces the variability of the approximation of the sum functional. Under suitable regularity assumptions, it is shown that this modification indeed allows a tighter control of the L^p error of the approximation.
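For reference, the basic sequential smoothing approach that such a modification improves on can be sketched as a forward-only, path-space estimator of a smoothed additive functional, in which each particle drags its running sum of states through the resampling steps. The toy AR(1) model and constants are assumptions for illustration; a forgetting variant would down-weight or truncate the contribution of old states to this running sum to reduce the path-degeneracy variance.

```python
import numpy as np

def smoothed_sum_functional(ys, n=2000, seed=0):
    """Forward-only SMC estimate of the smoothed sum E[sum_t x_t | y_{1:T}].

    Toy model (an illustrative assumption):
        x_t = 0.8 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2).
    Each particle carries a per-path additive statistic S; resampling the
    statistic together with the state yields the standard (high-variance)
    path-space estimator that forgetting ideas aim to stabilize.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    S = np.zeros(n)                                   # per-particle running sum
    est = 0.0
    for y in ys:
        x = 0.8 * x + rng.standard_normal(n)          # propagate
        S = S + x                                     # update additive functional
        logw = -0.5 * ((y - x) / 0.5) ** 2            # reweight by likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est = float(np.sum(w * S))                    # current smoothed-sum estimate
        idx = rng.choice(n, size=n, p=w)              # resample states and statistics
        x, S = x[idx], S[idx]
    return est
```

With all observations at zero, the true smoothed sum is zero; the scatter of the estimate around that value is exactly the variability that the forgetting modification is designed to control.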
This work presents the current state of the art in techniques for tracking a number of objects moving in a coordinated and interacting fashion. Groups are structured objects characterized by particular motion patterns. A group can comprise a small number of interacting objects (e.g. pedestrians, sport players, a convoy of cars) or hundreds or thousands of components, such as crowds of people. Group object tracking is closely linked with extended object tracking but at the same time has particular features which differentiate it from extended object tracking. Extended objects, such as those in maritime surveillance, are characterized by their kinematic states and their size or volume. Both group and extended objects give rise to a varying number of measurements and require trajectory maintenance. An emphasis is given here to sequential Monte Carlo (SMC) methods and their variants. Methods for small groups and for large groups are presented, including Markov chain Monte Carlo (MCMC) methods, the random matrices approach, and random finite set statistics methods. Efficient real-time implementations are discussed which are able to deal with the high dimensionality and provide high accuracy. Future trends and avenues are traced.
which modeling the degradation is of great concern. With updates of the state estimation and prediction, the RUL can be estimated as the time until the degrading process crosses a predefined degradation threshold. This paper is organized as follows. A brief review of Bayesian estimation methods, from Markov chain Monte Carlo (MCMC) to sequential Monte Carlo (SMC), is given in Section 2. The state estimation methodology for a given SSM with time-invariant parameters, and the online assessment of the RUL from the degrading process, are detailed in Section 3. A case study is introduced and the results are discussed in Section 4. The final conclusion is presented in Section 5.
A large portion of the inference performed on epidemic data is conducted using Markov chain Monte Carlo (MCMC) methods. This is due to the highly flexible nature of these algorithms, as well as their ability to utilise data augmentation techniques to handle the problem of missing data. However, infectious disease outbreaks often occur rapidly, with new information being obtained daily. With each new piece of data, an MCMC algorithm must be restarted to produce updated parameter estimates. For a field of research that benefits greatly from on-line inference, MCMC methods therefore do not appear to be the most suitable choice. Intuitively, a sequential method of updating the parameter estimates as new information is obtained would be better suited to the problem. This acts as the motivation for developing sequential Monte Carlo (SMC) methods with applications to epidemic data. The question that we will henceforth focus on is: how can we update the samples produced from the posterior distribution at time t to incorporate the new data collected at time t + 1?
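The reweight-and-resample answer to the closing question can be sketched on a toy binomial outbreak model. The binomial likelihood and the absence of an MCMC rejuvenation move (which a practical SMC scheme would add to combat sample impoverishment) are simplifying assumptions for the example.

```python
import numpy as np

def smc_update(theta, new_cases, n_trials, rng):
    """Update posterior samples for an infection probability when new data arrive.

    Toy sketch: theta holds samples from the posterior at time t; each sample
    is reweighted by the binomial likelihood of the newly observed data, then
    resampled, so the output samples target the posterior at time t + 1.
    """
    # Weight each existing sample by the likelihood of the new observation.
    logw = new_cases * np.log(theta) + (n_trials - new_cases) * np.log1p(-theta)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample: the samples now incorporate all data observed so far.
    return theta[rng.choice(len(theta), size=len(theta), p=w)]
```

Starting from uniform (Beta(1, 1)) prior samples and feeding in two batches of binomial data, the sample mean closely matches the exact conjugate Beta posterior, illustrating the sequential update without any restart of the sampler.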