We investigate the problem of bearings-only tracking of manoeuvring targets using particle filters (PFs). Three different PFs are proposed for this problem, which is formulated as a multiple-model tracking problem in a jump Markov system (JMS) framework. The proposed filters are (i) the multiple model PF (MMPF), (ii) the auxiliary MMPF (AUX-MMPF), and (iii) the jump Markov system PF (JMS-PF). The performance of these filters is compared with that of standard interacting multiple model (IMM)-based trackers such as the IMM-EKF and IMM-UKF for three separate cases: (i) the single-sensor case, (ii) the multisensor case, and (iii) tracking with hard constraints. A conservative Cramér-Rao lower bound (CRLB) applicable to this problem is also derived and compared with the RMS error performance of the filters. The results confirm the superiority of the PFs for this difficult nonlinear tracking problem.

In this paper, we propose a multiple object detection and tracking method based on Gaussian mixture modeling (GMM) and the multiple particle filters (MPFs) algorithm. In the first step, we improve the Gaussian mixture modeling method to construct the background model for moving object segmentation. The original Gaussian mixture model, proposed by Stauffer and Grimson [1], is a popular algorithm that has been widely adopted for moving object detection. However, the original method has a high computational cost and is sensitive to object shadows. Although researchers have proposed various approaches to deal with these difficulties [2-8], some problems remain unresolved. A typical example is the work proposed in [4], in which moving object shadows can be removed to reduce the interference with background reconstruction. Some methods improve the speed of background modeling, but at the price of reduced accuracy [6-8]. By analyzing the GMM algorithm, we found that some pixels may take a long time to be confirmed as background pixels due to the update process of GMM. To tackle this problem, we use the Expectation Maximization (EM) algorithm and the M most recent frames to accelerate the update process of the GMM algorithm.
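As a rough illustration of the per-pixel update a GMM background model performs, here is a minimal single-pixel sketch in the spirit of Stauffer and Grimson; the constants `ALPHA`, `MATCH_SIGMAS`, and `BG_THRESH` are illustrative choices, and the paper's EM acceleration with the M most recent frames is not shown.

```python
import numpy as np

# Minimal per-pixel Stauffer-Grimson-style GMM update (grayscale, one pixel).
# ALPHA, MATCH_SIGMAS and BG_THRESH are illustrative, not tuned values.
ALPHA, MATCH_SIGMAS, BG_THRESH = 0.05, 2.5, 0.7

def update_pixel(models, x):
    """models: list of [weight, mean, var]; x: new pixel intensity.
    Returns (models, is_background)."""
    matched = None
    for m in models:
        if abs(x - m[1]) <= MATCH_SIGMAS * np.sqrt(m[2]):
            matched = m
            break
    # weight update: the matched component gains, all components decay
    for m in models:
        m[0] = (1 - ALPHA) * m[0] + (ALPHA if m is matched else 0.0)
    if matched is not None:
        rho = ALPHA  # simplified learning rate
        matched[1] = (1 - rho) * matched[1] + rho * x
        matched[2] = (1 - rho) * matched[2] + rho * (x - matched[1]) ** 2
    else:
        # no match: replace the least probable component with a new one
        models.sort(key=lambda m: m[0])
        models[0] = [ALPHA, float(x), 900.0]
    total = sum(m[0] for m in models)
    for m in models:
        m[0] /= total
    # background = heaviest components up to cumulative weight BG_THRESH
    models.sort(key=lambda m: -m[0])
    cum, bg = 0.0, []
    for m in models:
        cum += m[0]
        bg.append(m)
        if cum > BG_THRESH:
            break
    return models, (matched is not None and any(m is matched for m in bg))
```

A pixel repeatedly matching a heavy component is classified as background, while a pixel matching only a light (recent) component is flagged as foreground.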

Abstract—The particle filtering technique with multiple cues such as colour, texture, and edges as observation features is a powerful technique for tracking deformable objects in image sequences with complex backgrounds. In this paper, our recent work [1] on single object tracking using particle filters is extended to multiple objects. In the proposed scheme, track initialisation is embedded in the particle filter without relying on an external object detection scheme. The proposed scheme avoids the use of hybrid state estimation for the estimation of the number of active objects and their associated state vectors as proposed in [2]. The number of active objects and track management are handled by means of probabilities of the number of active objects in a given frame. These probabilities are shown to be easily estimated by the Monte Carlo data association algorithm used in our algorithm.

In the control of an inverted pendulum, we assume that there is a dependency between the deflection of the pendulum and the other control variables corresponding to that deflection, for example, the force that must be applied to the cart for the pendulum to stand upright. The force is not observed directly but can be estimated from an observable state, i.e., the angle of the pendulum from its upright position. In general, particle filters can only track observable states, whereas our proposed method can estimate unobservable states by employing an architecture that shares likelihoods across multiple sets of particle filters.
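The idea of recovering an unobservable quantity through an observable one can be sketched with a plain bootstrap particle filter on a toy model; this is ordinary state augmentation, not the paper's shared-likelihood architecture, and all constants (`N`, `T`, `U_TRUE`, noise levels) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-augmentation sketch: a constant but unknown input u drives
#   x_t = x_{t-1} + u + process noise,   y_t = x_t + measurement noise,
# and only x is observed. Putting u into the particle state lets a plain
# bootstrap particle filter estimate the unobservable input.
N, T, U_TRUE = 2000, 60, 0.8
x = np.zeros(N)                        # particles for the observable state
u = rng.uniform(-2.0, 2.0, N)          # particles for the unobservable input
x_true = 0.0
for t in range(T):
    x_true += U_TRUE + rng.normal(0.0, 0.05)
    y = x_true + rng.normal(0.0, 0.2)  # noisy observation of x only
    u += rng.normal(0.0, 0.01, N)      # small jitter keeps u-diversity
    x += u + rng.normal(0.0, 0.05, N)  # propagate through the dynamics
    w = np.exp(-0.5 * ((y - x) / 0.2) ** 2)  # observation likelihood
    w /= w.sum()
    idx = rng.choice(N, N, p=w)        # resample
    x, u = x[idx], u[idx]
u_hat = float(u.mean())                # estimate of the unobserved input
```

Particles whose `u` keeps `x` consistent with the observations survive resampling, so the unobserved input is recovered through the observed state.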

Such a two-step blink detection system requires that the tracking algorithms be capable of handling the appearance change between open and closed eyes. In this work, we propose an alternative way that simultaneously tracks eyes and detects eye blinks. We use two interactive particle filters: one tracks the open eyes and the other tracks the closed eyes. Eye detection algorithms can be used to give the initial position of the eyes [10–12], and after that the interactive particle filters are used for eye tracking and blink detection. The set of particles that gives higher confidence is defined as the primary particle set and the other is defined as the secondary particle set. Estimates of the eyes' location, as well as the eye class labels (open-eye versus closed-eye), are determined by the primary particle filter, which is also used to reinitialize the secondary particle filter for the new observation. For each particle filter, the state variables characterize the location and size of the eyes. We use autoregression (AR) models to describe the state transitions, where the location is modeled by a second-order AR model and the scale is modeled by a separate first-order AR model. The observation model is a classification-based model, which tracks eyes according to the knowledge learned from examples instead of templates adapted from previous frames. Therefore, it can avoid accumulation of tracking errors. In our work, we use a regression model in tensor subspace to measure the posterior probabilities of the observations. Other classification/regression models can be used as well. Experimental results show the capability of the algorithm.
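The AR state transitions described above can be sketched as follows; the coefficients `A1`, `A2`, `B`, the reference scale, and the noise levels are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)

# State-transition step only: second-order AR for location, first-order AR
# for scale, mirroring the split described in the text.
A1, A2 = 1.8, -0.8    # second-order AR approximating constant-velocity motion
B, S_REF = 0.95, 1.0  # first-order AR pulls the scale toward a reference

def propagate(loc_t1, loc_t2, scale_t1, n):
    """loc_t1, loc_t2: particle locations at t-1 and t-2, shape (n, 2);
    scale_t1: particle scales at t-1, shape (n,)."""
    loc = A1 * loc_t1 + A2 * loc_t2 + rng.normal(0.0, 1.5, (n, 2))
    scale = S_REF + B * (scale_t1 - S_REF) + rng.normal(0.0, 0.02, n)
    return loc, scale
```

The second-order location model extrapolates velocity from the last two frames, while the first-order scale model keeps particle sizes slowly varying.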


In recent years, particle filtering has become a powerful tool for tracking signals and time-varying parameters of random dynamic systems. These methods require a mathematical representation of the dynamics of the system evolution, together with assumptions of probabilistic models. In this paper, we present a new class of particle filtering methods that do not assume explicit mathematical forms of the probability distributions of the noise in the system. As a consequence, the proposed techniques are simpler, more robust, and more flexible than standard particle filters. Apart from the theoretical development of specific methods in the new class, we provide computer simulation results that demonstrate the performance of the algorithms in the problem of autonomous positioning of a vehicle in a 2-dimensional space.
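One way such a distribution-free filter can work, sketched here under the assumption that the family resembles cost-reference particle filtering: particles are scored by an accumulated, user-chosen cost (here the squared innovation) instead of a probabilistic likelihood, so no noise distribution needs to be specified. The function and its parameters are illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Cost-reference-style particle step for 2-D positioning: propagate,
# accumulate a discounted cost, and select particles by low cost.
def cr_step(pos, vel, cost, y, dt=1.0, lam=0.9):
    vel = vel + rng.normal(0.0, 0.02, vel.shape)          # perturb velocity
    pos = pos + dt * vel + rng.uniform(-0.1, 0.1, pos.shape)
    cost = lam * cost + np.sum((y - pos) ** 2, axis=1)    # incremental cost
    w = 1.0 / (cost + 1e-9)           # any decreasing function of the cost
    w /= w.sum()
    idx = rng.choice(len(w), len(w), p=w)                 # select
    return pos[idx], vel[idx], cost[idx]
```

Because only a cost function is required, the same step applies whether or not the noise statistics are known.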


Abstract—Recently, Rao-Blackwellized particle filters have been introduced as an effective means to solve the simultaneous localization and mapping problem. This approach uses a particle filter in which each particle carries an individual map of the environment. Since the memory and computational requirements therefore grow with the number of particles, a key question is how to reduce this number. In this paper, we present adaptive techniques for reducing the number of particles in a Rao-Blackwellized particle filter for learning grid maps. We propose an approach to compute an accurate proposal distribution that takes into account not only the movement of the robot but also the most recent observation. This drastically decreases the uncertainty about the robot's pose in the prediction step of the filter. Furthermore, we present an approach to selectively carry out resampling operations, which seriously reduces the problem of particle depletion. Experimental results carried out with real mobile robots in large-scale indoor as well as outdoor environments illustrate the advantages of our methods over previous approaches.
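Selective resampling is commonly driven by the effective sample size N_eff; a minimal sketch of that criterion follows, with the conventional N/2 threshold assumed as the default.

```python
import numpy as np

# Selective resampling via the effective sample size: resample only when
# N_eff = 1 / sum(w_i^2) drops below a threshold, which avoids the particle
# depletion caused by unnecessary resampling steps.
def neff(weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def maybe_resample(particles, weights, rng, threshold=None):
    n = len(weights)
    if threshold is None:
        threshold = n / 2.0               # conventional default
    if neff(weights) >= threshold:
        return particles, weights         # weights still informative: skip
    p = np.asarray(weights, dtype=float)
    idx = rng.choice(n, size=n, p=p / p.sum())
    return [particles[i] for i in idx], np.full(n, 1.0 / n)
```

N_eff equals n for uniform weights and 1 when a single particle carries all the weight, so the threshold cleanly separates "healthy" from degenerate particle sets.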


Similar techniques can also be employed in a number of other settings. In addition to those which have already been discussed in the literature, the use of “local” particle filters to provide approximations of proposal distributions within a block-sampling framework is effective and can be justified as a standard Sequential Monte Carlo algorithm defined upon an extended space by employing further extensions of the auxiliary variable construction used here [Johansen and Doucet, 2012].

A coupling algorithm of thermo-chemical scalars on the FG solutions and fluid mechanics on the CG solutions using the Kalman and particle filters is presented and validated. The problem is relevant to the multiscale simulation of turbulent combustion flows using large-eddy simulation (LES) coupled with a low-dimensional stochastic model for subgrid-scale physics. Traditional LES targets the important role played by large-scale processes in the transport of momentum and scalars; in combustion, however, important physics resides at the unresolved scales of LES. The scheme establishes the potential of Kalman and particle filtering in coupling the densities from the FG and CG solutions, resulting in smooth filtered densities that are also steered by the FG solutions where heat release is modeled.


Intuitively, if the state process mixes well, then the error at time t − 1 will be reduced when we go forward one time step using a propagation and an update step. However, the update step can make this intuition invalid, and there are examples of state space models with ergodic dynamics where the filter does not forget the initial distribution. Forgetting of the initial distribution at a sufficiently fast rate does hold under an unrealistically strong condition of uniform mixing of the state process. This assumption has been used in most uniform-in-time convergence results for particle filters. Recently, however, Douc et al. (2014a) have been able to prove such results under substantially weaker conditions. See Atar (2011) for a review of results about forgetting of the initial distribution by the filter.


and their statistics can be computed using Kalman techniques. For non-linear non-Gaussian models, these distributions do not typically admit a closed form and it is necessary to employ numerical approximations. Recently, the class of Sequential Monte Carlo (SMC) methods, also known as particle filters, has emerged to solve this problem; see [6,11] for a review of the literature. Two classes of methods are primarily used: Sequential Importance Sampling and Resampling (SISR) algorithms [3,11,5] and Auxiliary Particle Filters (APF) [12,1,13].
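A minimal SISR recursion, shown on a toy 1-D linear-Gaussian model; the model and all constants are illustrative, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(3)

# Generic SISR (bootstrap) recursion for the toy model
#   x_t = a x_{t-1} + v_t,  y_t = x_t + e_t,
# with illustrative parameters; each pass does sample -> weight -> resample.
def sisr(ys, n=500, a=0.9, sv=1.0, se=0.5):
    x = rng.normal(0.0, 1.0, n)
    estimates = []
    for y in ys:
        x = a * x + rng.normal(0.0, sv, n)        # sample from the prior
        w = np.exp(-0.5 * ((y - x) / se) ** 2)    # weight by the likelihood
        w /= w.sum()
        estimates.append(float(np.dot(w, x)))     # weighted posterior mean
        x = x[rng.choice(n, size=n, p=w)]         # resample
    return estimates
```

An APF differs mainly in that particles are pre-selected using the current observation before propagation; the loop structure is otherwise the same.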


It has been noted that certain particle filters, such as the sequential importance sampling (SIS) filter, perform poorly when estimating static parameters (and across scale we do expect the parameters to remain static), as in such cases the problem of particle degeneracy is acute [(1)]. So, the particle filter chosen is the sequential importance resampling (SIR) filter, which resamples the posterior distribution at each step. In this case we resample all distributions as Gaussians, giving a Gaussian particle filter [(2)].
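The "resample all distributions as Gaussians" step can be sketched as follows: fit a Gaussian to the weighted particle cloud and redraw all particles from it. This is a generic Gaussian-particle-filter-style step, not necessarily the cited implementation.

```python
import numpy as np

# Gaussian resampling: replaces resampling-with-replacement by a parametric
# refit, which keeps particle diversity when estimating (near-)static
# parameters.
def gaussian_resample(particles, weights, rng):
    """particles: array of shape (n, d); weights: length-n array."""
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    mean = np.average(particles, weights=w, axis=0)
    diff = particles - mean
    cov = (w[:, None] * diff).T @ diff      # weighted sample covariance
    return rng.multivariate_normal(mean, cov, size=len(particles))
```

Unlike multinomial resampling, no particle is ever duplicated, which directly counters the degeneracy problem noted above.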

As linguistic models incorporate more subtle nuances of language and its structure, standard inference techniques can fall behind. Often, such models are tightly coupled such that they defy clever dynamic programming tricks. However, Sequential Monte Carlo (SMC) approaches, i.e. particle filters, are well suited to approximating such models, resolving their multi-modal nature at the cost of generating additional samples. We implement two particle filters, which jointly sample either sentences or word types, and incorporate them into a Gibbs sampler for part-of-speech (PoS) inference. We analyze the behavior of the particle filters, and compare them to a block sentence sampler, a local token sampler, and a heuristic sampler, which constrains inference to a single PoS per word type. Our findings show that particle filters can closely approximate a difficult or even intractable sampler quickly. However, we found that high posterior likelihood does not necessarily correspond to better Many-to-One accuracy. The results suggest that the approach has potential and that more advanced particle filters are likely to lead to stronger performance.

The developed algorithm is an application of sequential Monte Carlo methods (also known as particle filters) to 3D tracking using one or more cameras and one or more microphone arrays. Particle filters were originally introduced in the computer vision area in the form of the CONDENSATION algorithm [2]. Improvements of a technical nature to the CONDENSATION algorithm were provided by Isard and Blake [3], MacCormick and Blake [4], Li and Chellappa [5], and Philomin et al. [6]. The algorithm has seen applications to multiple aspects of both computer vision and signal processing. For example, a recent paper by Qian and Chellappa [7] describes a particle filter algorithm for the structure from motion problem using sparse feature correspondence, which also performs the estimation of sensor motion from the epipolar constraint, and a recently published book [8] describes many different applications in signal detection and estimation. Overall, it can be said that particle filters provide effective solutions for challenging issues in different areas of computer vision and signal processing.


The curse of dimensionality is a rather well-understood phenomenon in the statistical literature, and it is the reason why PF methods fail when applied to high-dimensional DA problems. We have recalled the main results related to weight degeneracy of PFs, and why localisation can be used as a solution. Yet implementing localisation in PF analysis raises two major issues: the gluing of locally updated particles and potential physical imbalance in the updated particles. Adequate solutions to these issues are not obvious, witness the few but dissimilar LPF algorithms developed in the geophysical literature. In this article we have proposed a theoretical classification of LPF algorithms into two categories. For each category, we have presented the challenges of local particle filtering and have reviewed the ideas that lead to practical implementations of LPFs. Some of them, already in the literature, have been detailed and sometimes generalised, while others are new in this field and yield improvements in the design of LPF methods.


In our analysis, we considered the memory requirement not only for resampling, but also for the complete PF. The memory size of the weights and the memory access during weight computation do not depend on the resampling algorithm. We consider particle allocation without indexes and with index addressing for the SR algorithm, and with arranged indexing for RSR, PR2, PR3, and OPR. For both particle allocation methods, the SR algorithm has to use two memories for storing particles. In Table 3, we can see the memory capacity for the RSR, PR2, and PR3 algorithms. The difference among these methods is only in the size of the index memory. For the RSR algorithm, which uses particle allocation with arranged indexes, the index memory has a size of 2M, where M words are used for storing the addresses of the particles that are replicated or discarded. The other M words represent the replication factors.
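For reference, the replication factors that RSR stores in its index memory are computed in a single pass over the weights; the sketch below is a standard formulation of residual systematic resampling, with variable names chosen for readability.

```python
import numpy as np

# Residual systematic resampling (RSR): one pass over the normalised weights
# yields one replication factor per particle, so only the factors (M words)
# and the particle addresses (M words) need to be stored.
def rsr(weights, rng):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    m = len(w)
    u = rng.uniform(0.0, 1.0 / m)             # single random offset
    repl = np.empty(m, dtype=int)
    for i in range(m):
        repl[i] = int(np.floor((w[i] - u) * m)) + 1
        u += repl[i] / m - w[i]
    return repl                                # repl.sum() == m
```

Because the factors come out in particle order, replication can be done in place, which is what enables the compact 2M-word index memory discussed above.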


trackers using region-based, Gaussian, or histogram-based methods for skin colour modeling. The evaluation is performed by averaging over the combinations of backgrounds and faces for each


If any of these conditions are not met, the PF algorithm can perform poorly. In the resampling stage, any particular sample with a high importance weight will be duplicated many times, and the cloud of samples may collapse to a single sample. Thus, the number of samples used to describe the posterior density function will become too small and inadequate. We can get around this difficulty by implementing a Markov chain Monte Carlo step after the selection step. But this method is only successful if the point-mass posterior approximation is already a close approximation of the true posterior. One of the main causes of sample depletion is the failure to move particles to areas of high observation likelihood. This failure stems directly from the most common choice of importance distribution, the transition prior, which does not incorporate the latest observation. To improve the performance of particle filters, we could design better proposal distributions that not only allow for easy sampling and evaluation of the importance weights, but also address the sample depletion problem. This can be done by choosing a proposal distribution that is conditioned on y_k.
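For a toy model x_k = a x_{k-1} + v_k, y_k = x_k + e_k with Gaussian noises, a proposal conditioned on y_k has a closed form: the optimal proposal p(x_k | x_{k-1}, y_k) is Gaussian and the importance weight reduces to p(y_k | x_{k-1}). The sketch below illustrates this on that illustrative model (parameters a, q, r are assumptions).

```python
import numpy as np

rng = np.random.default_rng(4)

# Optimal proposal for x_k = a x_{k-1} + v_k, y_k = x_k + e_k,
# with v ~ N(0, q) and e ~ N(0, r): the proposal mean is pulled toward y_k,
# directly addressing the sample-depletion failure mode described above.
def proposal_step(x_prev, y, a=0.9, q=1.0, r=0.25):
    mu_prior = a * x_prev
    var_post = 1.0 / (1.0 / q + 1.0 / r)         # combine prior and likelihood
    mu_post = var_post * (mu_prior / q + y / r)  # mean pulled toward y_k
    x_new = mu_post + rng.normal(0.0, np.sqrt(var_post), x_prev.shape)
    # log-weight increment: log p(y | x_prev) = log N(y; a x_prev, q + r),
    # up to an additive constant that cancels when weights are normalised
    log_w = -0.5 * (y - mu_prior) ** 2 / (q + r)
    return x_new, log_w
```

With the transition prior, particles would be drawn around a·x_{k-1} regardless of y_k; here they land in the region of high observation likelihood by construction.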
