discrete Markov chains

Top PDF discrete Markov chains:

Topics in the theory and applications of Markov chains

and a at the given locus, viz. (i) recurrent mutation (or immigration), emigration, and selection; and (ii) random sampling fluctuations. The second of these factors is of particular importance when the size of the population is finite, and effectively constant. In this situation finite discrete Markov chains often describe the evolutionary process with respect to the locus; for a discussion of such models we refer to Moran (1962), Chapter IV. On the other hand, as pointed out by Kimura (1955), “commonsense tells us that populations are usually so large that accidents of sampling are negligible”, so that it is often permissible
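As a concrete illustration of such a finite chain, the sketch below builds the transition matrix of a Wright-Fisher sampling model, a standard textbook example of random sampling fluctuations in a population of constant finite size; the choice of model and all parameters are illustrative, not taken from the excerpt.

```python
import numpy as np
from scipy.stats import binom

# Illustrative Wright-Fisher chain (not from the excerpt): one locus with
# alleles A and a in a population of constant size N; the state is the
# number of A alleles, and each generation is a binomial resample.
N = 20
states = np.arange(N + 1)

# P[i, j] = P(j copies of A next generation | i copies now)
P = np.array([binom.pmf(states, N, i / N) for i in states])

# Loss (state 0) and fixation (state N) of A are absorbing states.
assert np.isclose(P[0, 0], 1.0) and np.isclose(P[N, N], 1.0)

# Iterating the chain recovers the classical result that the probability
# of eventual fixation starting from i copies is i/N.
dist = np.zeros(N + 1)
dist[5] = 1.0                      # start with 5 copies of A
for _ in range(2000):
    dist = dist @ P
print(dist[N])                     # approximately 5/N = 0.25
```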

Some problems in stochastic analysis: Itô's formula for convex functions, interacting particle systems and Dyson's Brownian motion

As a motivation we start by discussing the aforementioned discrete Pitman’s theorem; we identify the result and the associated pair of discrete Markov chains by looking at the dynamics of randomly growing Young tableaux. There is no discrete equivalent of the construction of the BES(3) process as the radial part of three-dimensional Brownian motion in classical probability. However, this analogue can be identified as a coupling of the so-called spin process with a quantum random walk in quantum probability. Our objective is to construct a family of discrete bivariate Markov chains linking these two processes in an appropriate sense; this is done in Section 3.5.

Efficient Computation of Renaming Functions for ρ-reversible Discrete and Continuous Time Markov Chains

Structure of the paper. The remainder of this paper is organized as follows. Section 2 briefly recalls general notions about Markov chains and supplies the definitions and notation for both reversibility and ρ-reversibility that will be used throughout the following sections. Section 3 discusses in depth the conditions for ρ-reversibility that underlie our algorithm, and studies the complexity of the Reversibility modulo Renaming decision problem and its relation to Graph Isomorphism. Section 4 introduces the algorithm we propose for recovering all the feasible renaming functions mapping a Markov chain to its reversible isomorphic form; its correctness and complexity are also discussed. In Section 5 the performance and effectiveness of this novel algorithm are demonstrated by applying it to both continuous and discrete Markov chains, representing respectively synthetic examples and processes related to a real case study. Finally, Section 6 concludes the paper.

Covariance ordering for discrete and continuous time Markov chains

In Section 2, we recall two partial orderings for discrete time Markov chains that imply the efficiency ordering. One is the Peskun ordering (1973), extended by Tierney (1998) to general state spaces, and the other is the covariance ordering introduced by Mira and Geyer (1999). Ordering Markov chains is also important in the study of time invariance estimating equations (abbreviated TIEE), a general framework for constructing estimators for a generic model (Baddeley (2000)). A criterion for studying the performance of time invariance estimators is the Godambe-Heyde asymptotic variance, which is closely connected with the ordering of Markov chains. Indeed, Mira and Baddeley (2001) have shown that the Peskun ordering is a necessary condition for the Godambe-Heyde ordering. All the results in the literature regarding orderings of Markov chains for MCMC or TIEE purposes are (to our knowledge) for discrete time Markov chains, and nothing has been said about continuous time. Only recently have Leisen and Mira (2008) extended the Peskun ordering to continuous time Markov chains; in Section 3, we recall the basic definitions and theorems. Theoretically this result is important in the TIEE framework for studying the performance of estimators, and could open new simulation strategies in the MCMC context. How can a continuous time Markov chain be used to simulate a probability distribution? Leisen and Mira (2008) answered this question intuitively for finite state spaces by using a result that is formally proved in Section 4 of this paper. To distinguish the asymptotic variance of a continuous time Markov chain from that of a discrete time Markov chain, we use the notation v(f, Q) instead of v(f, Q, π). Relevant facts about continuous time Markov chains, the CLT, and a rigorous definition of asymptotic variance will be given in Section 3. Moreover, in Section 3, we extend the covariance ordering to continuous time Markov chains and establish the equivalence between the covariance ordering and efficiency of continuous time Markov chains.
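To make the notion of efficiency ordering concrete, here is a small self-contained sketch (our own illustration, not from the paper) that estimates the asymptotic variance of two discrete time chains with the same stationary distribution by the batch-means method; the chain with the smaller asymptotic variance is the more efficient one.

```python
import numpy as np

# Illustrative sketch (not from the paper): estimate the asymptotic
# variance v(f, P, pi) of two Metropolis chains with the same uniform
# stationary distribution on {0, ..., K-1}, using batch means.
rng = np.random.default_rng(0)
K = 50

def rw_metropolis(steps, width):
    """Random-walk Metropolis chain targeting Uniform{0, ..., K-1}."""
    x, path = K // 2, np.empty(steps)
    for t in range(steps):
        y = x + rng.integers(-width, width + 1)
        if 0 <= y < K:              # flat target: accept all in-range moves
            x = y                   # out-of-range proposals are rejected
        path[t] = x
    return path

def batch_means_variance(fx, n_batches=100):
    """Batch-means estimate of the asymptotic variance of the mean of f."""
    b = len(fx) // n_batches
    means = fx[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    return b * means.var(ddof=1)

for width in (1, 10):               # f here is the identity function
    print(width, batch_means_variance(rw_metropolis(200_000, width)))
```

The wider proposal mixes faster and yields a visibly smaller variance estimate, which is exactly the comparison the efficiency ordering formalizes.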

Markov Chains for Robust Graph Based Commonsense Information Extraction

Commonsense knowledge is useful for making Web search, local search, and mobile assistance behave in a way that the user perceives as “smart”. Most machine-readable knowledge bases, however, lack basic commonsense facts about the world, e.g. the property of ice cream being cold. This paper proposes a graph-based Markov chain approach to extract commonsense knowledge from Web-scale language models or other sources. Unlike previous work on information extraction, where the graph representation of factual knowledge is rather sparse, our Markov chain approach is geared towards the challenging nature of commonsense knowledge when determining the accuracy of candidate facts. The experiments show that our method results in more accurate and robust extractions. Based on our method, we develop an online system that provides commonsense property lookup for an object in real time.

11 Conditional probability and Markov chains

Interestingly, the probabilities determined in Example 13 are identical to the accuracy to which they are expressed (4 decimal places). This seems to indicate that, in this example, after a while the values in the transition matrix will settle down to constant values. This is called the steady state of the transition matrix, denoted T_s, and all two-state Markov chains possess the
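The excerpt breaks off here, but the limit it describes can be written out for a general two-state chain. Assuming the standard parametrisation with switching probabilities a and b (our notation, since the excerpt is cut off):

```latex
T = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix},
\quad 0 < a,\, b < 1,
\qquad
T^{\,n} \;\xrightarrow[n \to \infty]{}\;
T_s = \begin{pmatrix} \frac{b}{a+b} & \frac{a}{a+b} \\[4pt]
                      \frac{b}{a+b} & \frac{a}{a+b} \end{pmatrix}.
```

Each row of T_s is the stationary distribution, which is why the entries of T^n settle down to constant values.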

Perfect sampling for nonhomogeneous Markov chains and hidden Markov models

son’s method. The only existing works on perfect simulation for nonhomogeneous chains which we know of are Glynn and Thorisson (2001) and Stenflo (2008), which respectively provide perfect sampling methods for Markov chains conditioned to avoid certain states, and for products of transition matrices subject to a particular uniform regularity assumption, which we discuss in more detail later. Our first goal is to develop more general insight into how the feasibility of CFTP for nonhomogeneous chains is related to various ergodic properties of M and P_π.
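For readers unfamiliar with CFTP (coupling from the past), the following sketch shows the classical homogeneous-chain version on a toy three-state chain; the nonhomogeneous generalisations discussed in the paper are more delicate. The chain and coupling here are our own illustration.

```python
import numpy as np

# Background sketch (classical homogeneous CFTP, not the paper's
# nonhomogeneous setting): run coupled copies of a toy 3-state chain
# from every state at time -T; if all copies agree at time 0, that
# common value is an exact draw from the stationary distribution.
rng = np.random.default_rng(1)

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
cum = np.cumsum(P, axis=1)

def step_all(states, u):
    """Advance every copy with the *same* uniform draw u (grand coupling)."""
    return np.array([int(np.searchsorted(cum[s], u)) for s in states])

def cftp():
    us = []                       # fixed randomness for times -len(us) .. -1
    T = 1
    while True:
        # extend further into the past, *reusing* the old randomness
        us = list(rng.uniform(size=T - len(us))) + us
        states = np.arange(len(P))
        for u in us:              # run from time -T up to time 0
            states = step_all(states, u)
        if (states == states[0]).all():
            return int(states[0])
        T *= 2

print([cftp() for _ in range(10)])   # exact samples from pi = (1/4, 1/2, 1/4)
```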

Spectral Clustering for Graphs and Markov Chains

clusters of files in a Software Change Impact Analysis. Elsewhere, the SVD has been performed on a term-document matrix to cluster terms and documents [7, 28]. In multivariate statistical analysis, the SVD can generate lower-dimensional representations of both observations and variables from a multivariate data matrix. The benefit of the SVD is that it can be applied to a rectangular matrix rather than only a square matrix. We shall take advantage of this when using spectral graph partitioning to obtain clusters of both vertices and edges. An interesting observation from Meila et al. [24] is that spectral clustering can be cast in the framework of Markov random walks on a graph structure: solving the eigenvalue problem of the transition probability matrix of a Markov random walk on a graph achieves the normalized cut on that graph. Here spectral clustering is viewed as clustering the states of a Markov chain. A well-known result of Stewart [35] shows that the right-hand eigenvectors belonging to the dominant eigenvalues of the transition matrix of a Markov chain provide a means of grouping the states of the chain; this clustering method is based on a measure of the distance of states from the steady state. We shall combine these two techniques on Markov chains to obtain more comprehensive information about clustering states.
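A minimal sketch of the eigenvector-based state clustering described above, on a toy random walk over two loosely joined cliques (the graph and the sign-based grouping rule are illustrative choices, not taken from the text):

```python
import numpy as np

# Toy illustration (our construction): a random walk on two 3-cliques
# joined by a single edge; the right eigenvector for the second-largest
# eigenvalue of the transition matrix separates the two groups of states.
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)
A[2, 3] = A[3, 2] = 1.0                 # the bridge edge

P = A / A.sum(axis=1, keepdims=True)    # random-walk transition matrix

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)          # eigenvalues in decreasing order
u2 = vecs[:, order[1]].real             # subdominant right eigenvector

labels = (u2 > 0).astype(int)           # group states by the sign of u2
print(labels)                           # states {0,1,2} vs {3,4,5}
```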

Actuarial Modelling with Mixtures of Markov Chains

Although Markov chains are commonly used to model changes in individuals’ statuses, most of these processes in reality show non-Markov behaviour. This thesis investigates another approach to capturing the heterogeneity in individuals’ health and mortality by using mixtures of Markov chains. We investigate the rates of transitions in several multi-state models conditional on additional information about the process or the individual. Specifically, we develop our mixture models by considering three multi-state models: the three-state disability process, the four-state joint-life model, and the phase-type physiological ageing model. Each model appeals to us in a different way: the interest in how the history of being disabled affects future probabilities of being disabled drives us to mix over the disability processes; the interest in how the death of an individual affects the future force of mortality for the spouse drives us to mix over the joint-life models; and the interest in how to capture the heterogeneity of individuals’ ageing speeds drives us to mix over the physiological ageing processes.

Markov chains and the pricing of derivatives

In this thesis we propose a discretization scheme for stochastic processes that combine local volatility, stochastic volatility and jumps. The main idea of the method is to construct a continuous-time Markov chain that converges, in the limit of an infinite number of states, to the target stochastic process. We use the diffusion approximation theorem in [23] to show that a multi-dimensional continuous-time Markov chain converges in distribution to a multi-dimensional diffusion process. We then describe how to obtain the generator for the multi-dimensional continuous-time Markov chain, which is nothing more than a square matrix if the number of states is finite. It turns out that we can obtain the probability mass function of the Markov chain by computing the exponential of the generator. The rate of convergence for the one-dimensional log-normal process has been studied in [40], where the author proved that the probability kernel of a continuous-time Markov chain converges to the probability density function of the Black-Scholes diffusion process at rate O(h^2), where h is the spacing between the states.
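The generator-plus-matrix-exponential mechanics can be illustrated on the simplest case: a birth-death chain approximating Black-Scholes log-price dynamics on a uniform grid. The rate choice below is one standard finite-difference construction, and all parameters are made up, not the thesis's scheme.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative sketch (assumed parameters, not the thesis's scheme):
# a birth-death generator Q approximating the Black-Scholes log-price
# diffusion dX = (r - sigma^2/2) dt + sigma dW on a grid of spacing h.
r, sigma, h, horizon = 0.05, 0.2, 0.01, 1.0
mu = r - 0.5 * sigma ** 2

x = np.arange(-1.0, 1.0 + h, h)         # log-price grid
n = len(x)

# One standard rate choice: match the local drift and variance.
up = sigma ** 2 / (2 * h ** 2) + max(mu, 0.0) / h
dn = sigma ** 2 / (2 * h ** 2) + max(-mu, 0.0) / h

Q = np.zeros((n, n))
for i in range(1, n - 1):               # boundary states kept absorbing
    Q[i, i + 1], Q[i, i - 1] = up, dn
    Q[i, i] = -(up + dn)                # generator rows sum to zero

# Transition probabilities over the horizon: matrix exponential of Q.
P_T = expm(Q * horizon)
print(P_T[n // 2].sum())                # each row is a probability mass function
```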

Markov Chains in Credit Risk

The aim of this paper is to explain the relationship between Markov chains and credit risk. Transition probabilities need to be estimated for Markov chains in discrete time as well as in continuous time. Transition probabilities were estimated with two different methods: the MLE method in continuous time, and the multinomial estimator in discrete time. The observed transitions were transitions of companies between different rating categories, and we focused on the probability of transition from a given rating category to the default category. We also examined the differences between the estimators in discrete and continuous time; the better estimator was the one in continuous time. The mover-stayer model was developed in discrete time and includes two Markov chains. We estimated the parameters of the model and compared them in one example. Finally, we developed one more Markov model in discrete time, but this one is based on the behavioural score. For the model to be good, it should include second-order Markov chains, but that is beyond the scope of this work.
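A compact sketch of the two estimators mentioned (multinomial in discrete time, MLE of the generator in continuous time), on made-up rating histories; the data and category names are purely illustrative:

```python
import numpy as np

# Illustrative sketch of the two estimators on made-up rating data.
# Discrete time (multinomial): p_ij = n_ij / n_i from yearly ratings.
ratings = ["A", "A", "B", "B", "A", "B", "D", "D"]
cats = sorted(set(ratings))
idx = {c: k for k, c in enumerate(cats)}
m = len(cats)

counts = np.zeros((m, m))
for a, b in zip(ratings, ratings[1:]):
    counts[idx[a], idx[b]] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

# Continuous time (MLE of the generator): q_ij = n_ij / time spent in i,
# from (state, sojourn-time) pairs of the same fictitious company.
sojourns = [("A", 2.0), ("B", 2.0), ("A", 1.0), ("B", 1.0), ("D", 2.0)]
time_in = np.zeros(m)
for s, t in sojourns:
    time_in[idx[s]] += t
jumps = np.zeros((m, m))
for (a, _), (b, _) in zip(sojourns, sojourns[1:]):
    jumps[idx[a], idx[b]] += 1
Q_hat = jumps / time_in[:, None]
np.fill_diagonal(Q_hat, -Q_hat.sum(axis=1))

print(P_hat)    # discrete-time transition matrix estimate
print(Q_hat)    # continuous-time generator estimate
```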

Interactive Markov Chains Analyzer

The main functionalities are to calculate the maximum and minimum time-bounded reachability probability [9], the expected time, and the expected number of steps. Besides, it is possible to extract the sch…

Markov Chains. Chapter Introduction. Specifying a Markov Chain

30 (Coffman, Kadota, and Shepp [16]) A computing center keeps information on a tape in positions of unit length. During each time unit there is one request to occupy a unit of tape. When it arrives, the first free unit is used. Also, during each second, each of the occupied units is vacated with probability p. Simulate this process, starting with an empty tape. Estimate the expected number of sites occupied for a given value of p. If p is small, can you choose the tape long enough so that there is a small probability that a new job will have to be turned away (i.e., that all the sites are occupied)? Form a Markov chain whose states are the number of sites occupied. Modify the program FixedVector to compute the fixed vector. Use this to check your conjecture by simulation.

*31 (Alternate proof of Theorem 11.8) Let P be the transition matrix of an ergodic Markov chain. Let x be any column vector such that Px = x. Let M be the maximum value of the components of x. Assume that x_i = M. Show that if
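A possible simulation for Exercise 30 (our own sketch, not the book's program; tape length and step counts are arbitrary):

```python
import random

# One request per time unit takes the first free unit, and each
# occupied unit is vacated with probability p, as in Exercise 30.
def simulate_tape(p, tape_len=200, steps=10_000, seed=0):
    random.seed(seed)
    occupied = [False] * tape_len
    occ_sum, turned_away = 0, 0
    for _ in range(steps):
        # each occupied unit is vacated independently with probability p
        for i in range(tape_len):
            if occupied[i] and random.random() < p:
                occupied[i] = False
        # the arriving request takes the first free unit, if any
        if False in occupied:
            occupied[occupied.index(False)] = True
        else:
            turned_away += 1
        occ_sum += sum(occupied)
    return occ_sum / steps, turned_away

# in equilibrium roughly one departure per arrival, so about 1/p occupied
print(simulate_tape(p=0.1))
```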

Extreme events of Markov Chains

Note that in Example 4 each of the tail chains is a random walk (with a possible drift term), as in the asymptotically dependent case of Example 2 (i). This is unlike Examples 2 (ii) and (iii), which, though also asymptotically independent processes, have autoregressive tail chains. Example 4 thus illustrates two cases in a subtle boundary class where the norming functions are consistent with the asymptotically independent class while the tail chain is consistent with the asymptotically dependent class. To give an impression of the different behaviours of Markov chains in extreme states, Figure 1 presents properties of the sample paths of chains for an asymptotically dependent and various asymptotically independent chains. These Markov chains are stationary with unit exponential marginal distribution and are initialised with X_0 = 10,

Analysis of non-reversible Markov chains

In this chapter, we consider a Markov chain with transition kernel P and stationary distribution π, together with its time-reversal P* on a general state space X. To tackle non-reversibility, the path taken in this chapter is in the spirit of the reversiblization approach of Fill [42] and [89], as explained in Chapter 1, and rests on an additional reversiblization procedure. More specifically, we use and develop further the celebrated Metropolis-Hastings (MH) algorithm to provide an original in-depth analysis of non-reversible chains. The aim is to investigate Metropolis-Hastings (MH) reversiblizations and how they help to analyze non-reversible chains. The MH algorithm, developed by Metropolis et al. [77] and Hastings [48], is a Markov chain Monte Carlo method that is of fundamental importance in statistics and other applications; see e.g. Roberts and Rosenthal [94] and the references therein. The idea is to construct from a proposal kernel a reversible chain which converges to a desired distribution. Much of the literature focuses on the speed of convergence of specific algorithms, where the proposal kernel (e.g. a random walk or an Ornstein-Uhlenbeck proposal) is often itself reversible and the target density is in general not the stationary measure of the proposal. For example, Roberts and Tweedie [95] investigate random walk MH with exponential-family target densities, and Hairer et al. [47] compare the theoretical performance of random walk MH and the pCN algorithm for specific target densities by establishing their Wasserstein spectral gaps.
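As background for readers who have not seen it, here is a minimal random-walk Metropolis-Hastings sketch: a symmetric proposal is turned into a reversible chain converging to a target density (a standard normal here; the target and step size are illustrative choices, not the chapter's construction).

```python
import numpy as np

# Background sketch: random-walk Metropolis-Hastings building a
# *reversible* chain that converges to a target density pi.
rng = np.random.default_rng(2)

def log_pi(x):
    return -0.5 * x * x          # unnormalized log-density of N(0, 1)

def metropolis_hastings(steps=50_000, step_size=1.0):
    x, out = 0.0, np.empty(steps)
    for t in range(steps):
        y = x + step_size * rng.normal()              # symmetric proposal
        if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
            x = y                                     # accept
        out[t] = x                                    # on reject, keep x
    return out

draws = metropolis_hastings()
print(draws.mean(), draws.var())   # approximately 0 and 1
```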

Content Authentication of English Text via Internet using Zero Watermarking Technique and Markov Model

In the proposed watermark generation algorithm, the original text document (T) is provided by the author. A text analysis is then performed using a Markov model of order three to compute the number of occurrences of each next-state transition (ns) for every present state (ps). A matrix of transition probabilities, representing the number of occurrences of transitions from one state to another, is constructed according to the procedure explained in the previous section (3.1) and can be computed by equation (5).
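A sketch of the order-three counting step as we read it: each present state (ps) is a window of three consecutive words and the next state (ns) is the word that follows. The whitespace tokenisation and the sample sentence are our assumptions, not the paper's.

```python
from collections import Counter, defaultdict

# Count occurrences of next-state transitions (ns) for every present
# state (ps), with a present state being three consecutive words.
def transition_counts(text, order=3):
    words = text.split()
    counts = defaultdict(Counter)
    for i in range(len(words) - order):
        ps = tuple(words[i : i + order])   # present state
        ns = words[i + order]              # next state
        counts[ps][ns] += 1
    return counts

T = "the quick brown fox jumps over the quick brown dog"
for ps, ns in transition_counts(T).items():
    print(ps, dict(ns))
```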

Multi-regime models involving Markov chains

however, we look into the theory behind the test for the number of Markov chains required to satisfactorily fit a particular dataset. We explore this problem with mixtures of continuous-time Markov chains and specifically develop the theory for the test between 1 and 2 Markov chain components in the mixture. We conjecture that, similarly to Hartigan (1985), the log-likelihood ratio test statistic diverges to infinity with the sample size, contrary to the claim of Frydman (2005) that we can use standard theory to apply a chi-squared distribution with degrees of freedom equal to the difference in the number of parameters between the 1-component and 2-component mixture models. We provide evidence for our conjecture using a parametric bootstrap procedure and then adapt the theory of Fukumizu (2003) to our case to prove definitively that the log-likelihood ratio test statistic does in fact diverge to infinity with the sample size. In order to develop a test for the presence of a Markov chain mixture, the next step would be to derive the limiting distribution of the log-likelihood ratio test statistic. We pursue this for a special case in the following chapter.

A systematic literature review of operational research methods for modelling patient flow and outcomes within community healthcare and other settings

In a model of flow, the relevant system is viewed as comprising a set of distinct compartments or states, through which continuous matter or discrete entities move. Within healthcare applications, the entities of interest are commonly patients (although some applications may consider blood samples or forms of information). Côté (2000) identified two viewpoints from which patient flow has been understood: an operational perspective and, less commonly, a clinical perspective. From an operational perspective, the states that patients enter, leave and move between are defined by clinical and administrative activities and interactions with the care system, such as consulting a physician or being on the waiting list for surgery. Such states may each be associated with a specific care setting or some other form of resource, but this need not be the case. In the clinical perspective of patient flow, the states that patients enter, leave and move between are defined by some aspect of the patient’s health, for instance by whether the patient has symptomatic heart disease, or the clinical stage of a patient’s tumour. A more generic view is that the states within a flow model can represent any amalgam of activity, location, patient health and changeable demographics, say, patient age (Utley et al, 2009). A key characteristic is that the set of states and the set of transitions between states comprise a complete description of the system as modelled.

Discrete exterior calculus with applications to flows and spinors

The discrete exterior calculus itself is regarded as an outstanding problem, see for example [2]; in fact one finds that using chains and co-chains, respectively discrete objects and cont…

Products of stochastic matrices and applications

Applications are given to nonhomogeneous Markov models such as positive chains, some classes of finite chains considered by Doeblin, and weakly ergodic chains...
