Continuous-time Markov chains

Top PDFs on continuous-time Markov chains:

Multilevel Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics

There are multiple algorithms available to compute exact sample paths of continuous-time Markov chains, and, though they are all only slight variants of each other, they go by different names depending upon the branch of science within which they are applied. These include the Bortz-Kalos-Lebowitz algorithm, discrete event simulation, dynamic Monte Carlo, kinetic Monte Carlo, the n-fold way, the residence-time algorithm, the stochastic simulation algorithm, the next reaction method, and Gillespie's algorithm, where the final two are the most commonly cited in the biosciences. As the computational cost of exact algorithms scales linearly with the number of jump events (i.e. reactions), such methods become computationally intense for even moderately sized systems. This issue looms large when many sample paths are needed in a Monte Carlo setting. To address it, approximate algorithms, notably the class of "tau-leaping" methods introduced by Gillespie [21] in the chemical kinetic setting, have been developed with the explicit aim of greatly lowering the computational complexity of each path simulation while controlling the bias [3, 5, 6, 27, 34, 35].
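
All of the exact methods above share one core loop, which Gillespie's direct method states explicitly: draw an exponential waiting time from the total propensity, then choose which reaction fires in proportion to its rate. A minimal sketch in Python; the birth-death system and its rate constants at the end are made up for illustration:

```python
import numpy as np

def gillespie_ssa(x0, stoich, propensities, t_end, rng=None):
    """One exact sample path via Gillespie's direct method.

    x0            initial copy numbers, shape (n_species,)
    stoich        stoichiometry matrix, shape (n_reactions, n_species)
    propensities  function x -> reaction rates, shape (n_reactions,)
    """
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0:                          # nothing can fire: path is absorbed
            break
        t += rng.exponential(1.0 / a0)       # time to the next jump
        j = rng.choice(len(a), p=a / a0)     # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# made-up birth-death system: 0 -> S at rate k1, S -> 0 at rate k2 * x
k1, k2 = 10.0, 0.5
stoich = np.array([[+1], [-1]])
props = lambda x: np.array([k1, k2 * x[0]])
times, states = gillespie_ssa([0], stoich, props, t_end=20.0)
```

The cost per path is proportional to the number of recorded jumps, which is exactly the linear scaling the excerpt warns about.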

Fixed point theorems and explicit estimates for convergence rates of continuous time Markov chains

However, their methods are not suited to general continuous-time Markov chains, especially when the symmetry condition, coupling condition, or stochastic monotonicity condition is not satisfied; examples include bounding the convergence rates of Markov chains with instantaneous states, such as the Kolmogorov matrix, or of the regular birth-and-death process. In this paper, we discuss this problem.

Model checking of continuous time Markov Chains against timed automata specifications

This paper addressed the quantitative (and qualitative) verification of a finite CTMC C against a linear real-time specification given as a deterministic timed automaton (DTA). We studied DTA with finite and Muller acceptance criteria. The key result (for finite acceptance) is that the probability of C ⊨ A equals the reachability probability in the embedded discrete-time Markov process of a PDP. This PDP is obtained via a standard region construction. Reachability probabilities in the PDPs thus obtained are characterized by a system of Volterra integral equations of the second kind and are shown to be approximated by a system of PDEs. For Muller acceptance criteria, the probability of C ⊨ A equals the reachability probability of the accepting terminal SCCs in the embedded PDP. These results apply to DTA with arbitrarily (but finitely) many clocks. For single-clock DTA with finite acceptance, Pr(C ⊨ A) is obtained by solving a system of linear equations whose coefficients are solutions of a system of ODEs. As the coefficients are in fact transient probabilities in CTMCs, this result implies that standard algorithms for CTMC analysis suffice to verify single-clock DTA specifications.
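
The closing observation is the computationally friendly part: transient distributions of a finite CTMC solve the Kolmogorov forward equation π'(t) = π(t)Q and can be obtained from a matrix exponential. A minimal sketch of that sub-step, with a made-up three-state generator (this is not the paper's model-checking algorithm itself):

```python
import numpy as np
from scipy.linalg import expm

# made-up generator Q of a 3-state CTMC (each row sums to zero)
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])
pi0 = np.array([1.0, 0.0, 0.0])          # start in state 0

def transient(pi0, Q, t):
    """Transient distribution pi(t) = pi(0) exp(Qt)."""
    return pi0 @ expm(Q * t)

print(transient(pi0, Q, 0.5))
```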

Simulation algorithms for continuous time Markov chain models

Continuous-time Markov chains are often used in the literature to model the dynamics of a system with low species counts and uncertainty in transitions. In this paper, we investigate three particular algorithms that can be used to numerically simulate continuous-time Markov chain models (a stochastic simulation algorithm and explicit and implicit tau-leaping algorithms). To compare these methods, we used them to analyze two stochastic infection models with different levels of complexity. One of these models describes the dynamics of Vancomycin-Resistant Enterococcus (VRE) infection in a hospital, and the other describes the early infection of Human Immunodeficiency Virus (HIV) within a host. The relative efficiency of each algorithm is determined based on computational time and the degree of precision required. The numerical results suggest that all three algorithms have similar computational efficiency for the VRE model due to the low number of species and small number of transitions. However, we found that with the larger and more complex HIV model, the tau-leaping methods, with suitable implementation and modification, are preferred.
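
For contrast with exact simulation, a minimal sketch of explicit tau-leaping with a fixed step size; a practical implementation would choose τ adaptively and treat possible negative populations more carefully, and the toy system is again made up:

```python
import numpy as np

def tau_leap(x0, stoich, propensities, t_end, tau, rng=None):
    """Explicit tau-leaping: fire Poisson(a_j(x) * tau) copies of each
    reaction per step, holding the propensities fixed within the step."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    while t < t_end:
        a = propensities(x)
        k = rng.poisson(a * tau)           # reaction counts over [t, t + tau)
        x = np.maximum(x + k @ stoich, 0)  # crude guard against negative counts
        t += tau
    return x

# same made-up birth-death system: 0 -> S at rate k1, S -> 0 at rate k2 * x
k1, k2 = 10.0, 0.5
stoich = np.array([[+1], [-1]])
props = lambda x: np.array([k1, k2 * x[0]])
print(tau_leap([0], stoich, props, t_end=20.0, tau=0.1))
```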

Some strong limit theorems for nonhomogeneous Markov chains indexed by controlled trees

This work, motivated by Peng (), mainly considers a class of non-uniformly bounded-degree trees and studies some strong limit properties, including the strong law of large numbers and the AEP with a.e. convergence, for nonhomogeneous Markov chains indexed by a controlled tree, which permits some of the nodes to have an asymptotically infinite degree. The outcomes generalize some well-known results. The technical route …

Extreme events of Markov Chains

Existing theory covers neither important cases such as Markov chains whose transition kernel normalizes under the canonical family of Heffernan and Tawn (2004), nor Gaussian copulas. Our new results cover existing results and these important families, as well as inverted max-stable copulas (Ledford and Tawn, 1997). Furthermore, we are able to derive additional structure for the tail chain, termed the hidden tail chain, both when classical results say that the tail chain suddenly leaves extreme states and when the tail chain is able to return to extreme states from non-extreme states. One key difference in our approach is that, while previous accounts focus on regularly varying marginal distributions, we assume our marginal distributions to be in the Gumbel domain of attraction, like Smith (1992); with affine norming, this marginal choice helps to reveal structure not apparent through affine norming of regularly varying marginals.

Convergence of alternating Markov chains

Suppose we have two Markov chains defined on the same state space. What happens if we alternate them? If they both converge to the same stationary distribution, will the chain obtained by alternating them also converge? Consideration of these questions is motivated by the possible use of two different updating schemes for MCMC estimation, when much faster convergence can be achieved by alternating both schemes than by using either singly.
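
Part of the question has a quick answer: if π is invariant for both kernels, it is invariant for the alternation, since πP₁P₂ = πP₂ = π; the delicate issue is whether the alternating chain actually converges to it. A small numerical check with two made-up doubly stochastic 3-state kernels, both of which have the uniform stationary distribution:

```python
import numpy as np

def stationary(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# made-up doubly stochastic kernels, so each has pi = (1/3, 1/3, 1/3)
P1 = np.array([[0.50, 0.25, 0.25],
               [0.25, 0.50, 0.25],
               [0.25, 0.25, 0.50]])
P2 = np.array([[0.0, 0.5, 0.5],
               [0.5, 0.0, 0.5],
               [0.5, 0.5, 0.0]])

# the alternated kernel P1 @ P2 keeps the same stationary distribution
print(stationary(P1), stationary(P2), stationary(P1 @ P2))
```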

Small sets and Markov transition densities

The theory of general state-space Markov chains can be strongly related to the case of discrete state-space by use of the notion of small sets and associated minorization conditions. The general theory shows that small sets exist for all Markov chains on state-spaces with countably generated σ-algebras, though the minorization provided by the theory concerns small sets of order n and n-step transition kernels for some unspecified n. Partly motivated by the growing importance of small sets for Markov chain Monte Carlo and Coupling from the Past, we show that in general there need be no small sets of order n = 1 even if the kernel is assumed to have a density function (though of course one can take n = 1 if the kernel density is continuous). However n = 2 will suffice for kernels with densities (integral kernels), and in fact small sets of order 2 abound in the technical sense that the 2-step kernel density can be expressed as a countable sum of nonnegative separable summands based on small sets. This can be exploited to produce a representation using a latent discrete Markov chain; indeed one might say, inside every Markov chain with measurable transition density there is a discrete state-space Markov chain struggling to escape. We conclude by discussing complements to these results, including their relevance to Harris-recurrent Markov chains, and we relate the counterexample to Turán problems for bipartite graphs.

On the complexity of computing maximum entropy for Markovian Models

We investigate the complexity of computing entropy of various Markovian models including Markov Chains (MCs), Interval Markov Chains (IMCs) and Markov Decision Processes (MDPs). We consider both entropy and entropy rate for general MCs, and study two algorithmic questions, i.e., the entropy approximation problem and the entropy threshold problem. The former asks for an approximation of the entropy/entropy rate within a given precision, whereas the latter aims to decide whether they exceed a given threshold. We give polynomial-time algorithms for the approximation problem, and show the threshold problem is in P^{CH_3} (hence in PSPACE) …
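
For a plain ergodic MC, the entropy rate in question has the closed form H = −Σᵢ πᵢ Σⱼ P(i,j) log P(i,j), so approximating it essentially reduces to computing the stationary distribution; the paper's complexity results concern the harder interval and decision-process variants. A minimal sketch of the MC case with a made-up two-state kernel:

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate of an ergodic chain: H = -sum_i pi_i sum_j P_ij log2 P_ij."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    logP = np.zeros_like(P)
    mask = P > 0
    logP[mask] = np.log2(P[mask])          # 0 * log 0 treated as 0
    return -np.sum(pi[:, None] * P * logP)

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(entropy_rate(P))                     # bits per step
```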

Perturbation analysis in finite LD‐QBD processes and applications to epidemic models

In this paper, our interest is in the perturbation analysis of level-dependent quasi-birth-and-death (LD-QBD) processes, which constitute a wide class of structured Markov chains. An LD-QBD process has the special feature that its space of states can be structured by levels (groups of states), so that a tridiagonal-by-blocks structure is obtained for its infinitesimal generator. For these processes, a number of algorithmic procedures exist in the literature to compute several performance measures while exploiting the underlying matrix structure; among others, these measures are related to first-passage times to a certain level L(0) and hitting probabilities at this level, the maximum level visited by the process before reaching states of level L(0), and the stationary distribution. For the case of a finite number of states, our aim here is to develop algorithms, analogous to the ones computing these measures, for their perturbation analysis. This approach uses matrix calculus and exploits the specific structure of the infinitesimal generator, which allows us to obtain additional information during the perturbation analysis of the LD-QBD process by dealing with specific matrices carrying probabilistic insights into the dynamics of the process. We illustrate the approach by applying multitype versions of the susceptible-infective (SI) and susceptible-infective-susceptible (SIS) epidemic models to the spread of antibiotic-sensitive and antibiotic-resistant bacterial strains in a hospital ward.
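
To make the tridiagonal-by-blocks structure concrete: in a single-type SIS model the level is the number of infectives and each block is a scalar, so the generator is literally tridiagonal (in the paper's multitype models the blocks are matrices). A sketch under made-up parameters, including one of the measures mentioned, the mean first-passage time to level 0:

```python
import numpy as np

def sis_generator(N, beta, gamma):
    """Generator of a finite SIS chain with levels i = 0..N infectives:
    infection at rate beta*i*(N-i)/N moves up, recovery at gamma*i moves down."""
    Q = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        up = beta * i * (N - i) / N
        down = gamma * i
        if i < N:
            Q[i, i + 1] = up
        if i > 0:
            Q[i, i - 1] = down
        Q[i, i] = -(up + down)
    return Q

Q = sis_generator(N=5, beta=1.2, gamma=0.8)   # made-up rates

# mean first-passage times m to level 0: solve Q_tt m = -1 on states 1..N
m = np.linalg.solve(Q[1:, 1:], -np.ones(Q.shape[0] - 1))
print(m)
```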

Variance Optimization for Continuous Time Markov Decision Processes

The Postal Service Company's catalogue information system, inventory management, and supply chain management are all early successful applications of the Markov decision process. Later, many real-life problems, such as sequential assignment, machine maintenance, and secretary problems, came to be described by the dynamic Markov Decision Process (MDP) model and were finally solved very well by MDP methods [1] [2].

Piecewise Deterministic Markov Processes for Continuous Time Monte Carlo

One aspect of continuous-time Monte Carlo that is particularly relevant for modern applications of Bayesian statistics is that it seems well-suited to big-data problems. If we denote our target distribution by π(x), then the dynamics of these methods depend on the target through ∇ log π(x). Now if π(x) is a posterior distribution, then it will often be in product form, where each factor relates to a data point or set of data points. This means that ∇ log π(x) is a sum, and hence is easy to approximate unbiasedly using sub-samples of the data. It turns out we can use these unbiased estimators within continuous-time Monte Carlo methods without affecting their validity; that is, the algorithms will still target π(x). This is in contrast to other discrete-time MCMC methods that use sub-sampling (Welling and Teh, 2011; Bardenet et al., 2017; Ma et al., 2015; Quiroz et al., 2015), where the approximation in the sub-sampling means that the algorithms will only target an approximation to π(x). It also compares favourably with big-data methods that independently run MCMC on batches of data and then combine the MCMC samples in some way (Neiswanger et al., 2014; Scott et al., 2016; Srivastava et al., 2015; Li et al., 2017), as the combination steps involved will also introduce some approximation error.
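
The sub-sampling point in sketch form: if log π(x) = Σᵢ₌₁ᴺ log fᵢ(x), then (N/b) times the gradient summed over a random batch of size b is unbiased for ∇ log π(x). The Gaussian-likelihood target below is made up purely to check unbiasedness numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)

def full_grad(x):
    """Full-data gradient of log pi(x) for unit-variance Gaussian factors."""
    return np.sum(data - x)

def subsampled_grad(x, batch=100):
    """Unbiased estimator: (N / batch) * gradient over a random mini-batch."""
    idx = rng.integers(len(data), size=batch)
    return len(data) / batch * np.sum(data[idx] - x)

x = 0.0
estimates = [subsampled_grad(x) for _ in range(2_000)]
print(full_grad(x), np.mean(estimates))   # the estimator mean matches
```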

Piecewise deterministic Markov processes for continuous time Monte Carlo

Auto-correlation plots for the intercept parameter are shown in Figure 2. Both samplers mix quickly in the low-dimensional case (see left-hand column). However, the Bouncy Particle Sampler shows negative auto-correlation. We believe this is caused by the sampler's dynamics, which tend to move from one tail of the posterior to the other [behaviour that is particularly pronounced for 1-dimensional unimodal target distributions; see Bierkens and Duncan (2017)]. As a result of this negative correlation, estimates of the auto-correlation time for the Bouncy Particle Sampler are slightly smaller than for MALA. However, as the dimension increases, we tend to see bigger advantages from using the Bouncy Particle Sampler, perhaps due to its nonreversible dynamics. This can be seen from the auto-correlation plots for d = 128 (see right-hand column). For this run of the two algorithms the estimated auto-correlation times are approximately 670 for MALA and 130 for the Bouncy Particle Sampler, suggesting a 5-fold gain in efficiency from using the latter algorithm. Key to the strong performance of the Bouncy Particle Sampler for this example is the fact that we can efficiently simulate the event times using thinning, with the method described in the Appendix; around 30% of proposed event times are accepted.
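
The thinning mentioned in the last sentence, in sketch form: to draw the first event of a process with intensity λ(t) bounded above by a constant Λ, propose candidate times at rate Λ and accept each with probability λ(t)/Λ; the quoted ~30% acceptance rate reflects how tight the bound is. The intensity below is made up:

```python
import numpy as np

def first_event_by_thinning(lam, Lam, rng=None):
    """First event time of a Poisson process with intensity lam(t),
    given a constant bound Lam >= lam(t) for all t."""
    rng = rng or np.random.default_rng()
    t = 0.0
    while True:
        t += rng.exponential(1.0 / Lam)    # candidate from the rate-Lam process
        if rng.uniform() < lam(t) / Lam:   # accept with probability lam(t)/Lam
            return t

lam = lambda t: 2.0 + np.sin(t)            # made-up intensity, lam(t) <= 3
print(first_event_by_thinning(lam, Lam=3.0))
```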

Perfect sampling for nonhomogeneous Markov chains and hidden Markov models

1. Introduction. With the introduction of their famous Coupling From the Past (CFTP) algorithm, Propp and Wilson (1996) showed how to use a form of backward coupling to simulate exact samples from the invariant distribution of an ergodic Markov chain in a.s. finite time. Foss and Tweedie (1998), in part appealing to a construction of Murdoch and Green (1998), showed that the existence of an a.s. finite backward coupling time characterizes uniform geometric ergodicity of the Markov chain in question. The present paper extends these ideas in the context of nonhomogeneous Markov chains, a setting which to date has received little attention, perhaps due to a lack of appropriate formulation or applications.
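
A minimal sketch of CFTP on a small finite chain, using the grand coupling that runs one copy from every state with shared per-step randomness and doubles the look-back until all copies coalesce by time 0. The kernel is made up, and the monotonicity tricks that make CFTP practical on large state spaces are omitted:

```python
import numpy as np

def update(state, u, P):
    """Deterministic update driven by a shared uniform u (inverse-CDF step)."""
    cdf = np.cumsum(P[state])
    return int(np.searchsorted(cdf, u * cdf[-1]))

def cftp(P, rng=None):
    """Propp-Wilson CFTP: an exact draw from the stationary distribution."""
    rng = rng or np.random.default_rng()
    n = P.shape[0]
    us = []                                # us[t] is the noise at time -(t+1)
    T = 1
    while True:
        while len(us) < T:                 # extend the noise further into the past
            us.append(rng.uniform())
        states = list(range(n))            # one copy per state, started at time -T
        for t in range(T - 1, -1, -1):     # run forward from -T up to time 0
            states = [update(s, us[t], P) for s in states]
        if len(set(states)) == 1:          # coalesced: the value at time 0 is exact
            return states[0]
        T *= 2                             # look further back, reusing old noise

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])            # made-up ergodic kernel
print(cftp(P))
```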

Continuous time Markov decision process of a single aisle warehouse

In this report, a CTMDP with the average cost criterion was developed for an automated machine that serves a single-aisle warehouse and build-up stations, in order to obtain a new picking and storage policy. The CTMDP minimises the average weighted waiting time of containers. An ε-optimal policy and an approximation to its gain were found using the value iteration algorithm, and a heuristic policy was extracted from it. The heuristic policy, the ε-optimal policy, and the policies from the literature were tested in the MDP model and in the case study, using the value iteration algorithm and a discrete-event simulation of the sorting system respectively. To simulate the policies, the heuristic policy had to be interpreted so that it could be implemented, and some extra rules had to be added for parts of the sorting system that were disregarded in the CTMDP. Thereafter, a robustness analysis was done by running three scenarios with different workloads.
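
A sketch of the machinery underneath such a model: uniformize the CTMDP at a rate Λ no smaller than any total exit rate, then run relative value iteration on the resulting discrete-time MDP under the average-cost criterion. The two-state, two-action data below are made up; the warehouse model is of course far larger:

```python
import numpy as np

# made-up CTMDP: rates[s, a, s'] are transition rates, c[s, a] are cost rates
rates = np.array([[[0.0, 2.0], [0.0, 5.0]],
                  [[1.0, 0.0], [4.0, 0.0]]])
c = np.array([[1.0, 3.0],
              [2.0, 0.5]])

Lam = rates.sum(axis=2).max()                        # uniformization rate
n_s, n_a = c.shape
P = rates / Lam                                      # off-diagonal probabilities
for s in range(n_s):
    P[s, :, s] += 1.0 - rates[s].sum(axis=1) / Lam   # self-loop mass

h = np.zeros(n_s)
for _ in range(10_000):                              # relative value iteration
    Th = (c / Lam + P @ h).min(axis=1)               # one-step Bellman backup
    h_new = Th - Th[0]                               # pin the reference state
    if np.max(np.abs(h_new - h)) < 1e-10:
        break
    h = h_new

gain = Th[0] * Lam                                   # average cost per unit time
policy = (c / Lam + P @ h).argmin(axis=1)
print(gain, policy)
```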

Randomized and Relaxed Strategies in Continuous-Time Markov Decision Processes

If we look at the control problem, where the transition rate q(dy | x, a) depends on the action a, we face at least two different standard models. If the actions can be changed only at the jump epochs (such actions may also be randomized), then the model is called an "exponential semi-Markov decision process" (ESMDP). If, e.g., two actions a_1 and a_2 are chosen with probabilities p(a_1) and p(a_2) = 1 − p(a_1), then the sojourn time in state x has the cumulative distribution function (CDF) 1 − [p(a_1)e^{−q_x(a_1)t} + p(a_2)e^{−q_x(a_2)t}].
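
The mixture structure in code, as a toy: draw the action once at the jump epoch, then hold an exponential sojourn at that action's rate, which yields exactly the hyperexponential CDF above (the probabilities and rates are made up):

```python
import numpy as np

rng = np.random.default_rng()

def sojourn_time(p1, q1, q2):
    """Sojourn in state x when a1 is chosen w.p. p1, else a2: the CDF is
    1 - [p1 * exp(-q1 * t) + (1 - p1) * exp(-q2 * t)]."""
    q = q1 if rng.uniform() < p1 else q2
    return rng.exponential(1.0 / q)

samples = [sojourn_time(p1=0.3, q1=2.0, q2=0.5) for _ in range(100_000)]
print(np.mean(samples), 0.3 / 2.0 + 0.7 / 0.5)   # empirical vs exact mean
```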

Non reversible Metropolis Hastings

To remedy these issues, in this paper MH is extended to 'non-reversible Metropolis-Hastings' (NRMH), which allows for non-reversible transitions. The main idea of this paper is to modify the acceptance ratio, which is further discussed in Sect. 2. For pedagogical purposes, the theory is first developed for discrete state spaces. It is shown how the acceptance probability of MH can be adjusted so that the resulting chain in NRMH has a specified 'vorticity', and therefore will be non-reversible. Any Markov chain satisfying a symmetric structure condition can be constructed by NRMH, which establishes the generality of the algorithm. Theoretical advantages of finite state space non-reversible chains in terms of improved asymptotic variance and large deviations estimates are briefly mentioned in Sect. 3. In particular, we recall a result from Sun et al. (2010) that adding non-reversibility decreases asymptotic variance. Also, we present a variation on a result by Rey-Bellet and Spiliopoulos (2014) on large deviations from the invariant distribution.

An Uninterrupted Time Markov Process on the Transition Probability Matrix with Markov Branching Process

for all …. It is routine to prove that the … form a family of transition probabilities generating our limit process …. From (3.5) and stochastic monotonicity it follows immediately that the family of probability measures … is tight for every bounded interval …. Hence (3.6) defines a Feller semigroup, i.e., … is bounded and continuous for all bounded and continuous functions …. This completes the proof of Theorem 3.4. The chains … have transition probabilities satisfying conditions (i) and (ii) of Proposition 3.2. These two conditions can be replaced by the weaker condition: there exists a constant … such that …

Markov chain approximations to scale functions of Lévy processes

and [5, Chapter VII], while an excellent account of the available numerical methods for computing them can be found in [21, Chapter 5]. A few important examples of processes for which the scale functions can be given analytically appear, e.g., in [17]; and in certain cases it is possible to construct them indirectly [21, Chapter 4] (i.e. not starting from the basic datum, which we consider here to be the characteristic triplet of X). Finally, in the special case when X is a positive drift minus a compound Poisson subordinator, we note that numerical schemes for (finite-time) ruin/survival probabilities (expressible in terms of scale functions), based on discrete-time Markov chain approximations of one sort or another, have been proposed in the literature (see [36, 16, 11, 15] and the references therein).

Estimating lead time using Markov Chains

Another test compared the results of the model when using the estimated transition probabilities and when using the real probabilities (computed from the information of each order). Using the probabilities and durations computed from the data of each order has a positive effect on the lead time estimate, in the sense that the performance is better: the mean absolute error decreases by 2.5 units and, most importantly, the mean relative error reduces to 6% for the training data, with similar results for the testing data. This also means that the performance of the model cannot be improved beyond this level solely by improving the transition probabilities and durations.
