# Chapter 2 Discrete-Time Markov Models

## Discrete-Time Markov Models

Consider a system that is observed at discrete time points $n = 0, 1, 2, \ldots$, and let $X_n$ denote the (random) state of the system at time $n$, so that $\{X_n, n \ge 0\}$ is a discrete-time stochastic process. [The introductory example that opened this section did not survive extraction.]

**Definition 2.1.** A stochastic process $\{X_n, n \ge 0\}$ on state space $S$ is called a discrete-time Markov chain (DTMC) if, for all $n \ge 0$ and all states $i_0, i_1, \ldots, i_{n-1}, i, j \in S$,

$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i), \tag{2.1}$$

and it is called time homogeneous if, in addition, for all $n \ge 0$,

$$P(X_{n+1} = j \mid X_n = i) = P(X_1 = j \mid X_0 = i). \tag{2.2}$$

The quantity $P(X_{n+1} = j \mid X_n = i)$ is called a one-step transition probability of the DTMC at time $n$. Equation (2.2) implies that, for time-homogeneous DTMCs, the one-step transition probabilities do not depend on $n$.

In this chapter we shall consider only time-homogeneous DTMCs with finite state space $S = \{1, 2, \ldots, N\}$. We shall always mean a time-homogeneous DTMC when we write DTMC.

V.G. Kulkarni, *Introduction to Modeling and Analysis of Stochastic Systems*, Springer Texts in Statistics, DOI 10.1007/978-1-4419-1772-0_2, © Springer Science+Business Media, LLC 2011

Since the one-step transition probabilities of a time-homogeneous DTMC do not depend on $n$, we write

$$p_{i,j} = P(X_{n+1} = j \mid X_n = i), \quad i, j \in S,$$

and collect these probabilities in the $N \times N$ matrix

$$P = \begin{bmatrix}
p_{1,1} & p_{1,2} & p_{1,3} & \cdots & p_{1,N} \\
p_{2,1} & p_{2,2} & p_{2,3} & \cdots & p_{2,N} \\
p_{3,1} & p_{3,2} & p_{3,3} & \cdots & p_{3,N} \\
\vdots  & \vdots  & \vdots  &        & \vdots  \\
p_{N,1} & p_{N,2} & p_{N,3} & \cdots & p_{N,N}
\end{bmatrix}.$$

The matrix $P$ in the equation above is called the one-step transition probability matrix, or transition matrix for short, of the DTMC. Note that the rows correspond to the current state and the columns to the next state: the $i$th row $[p_{i,1}, p_{i,2}, p_{i,3}, \ldots, p_{i,N}]$ gives the conditional pmf of the next state when the current state is $i$, and it does not depend on $n$.

**Example 2.1.** Consider a DTMC $\{X_n, n \ge 0\}$ on state space $S = \{1, 2, 3\}$ whose transition matrix, assembled from the arc labels of Figure 2.1, is

$$P = \begin{bmatrix}
.20 & .30 & .50 \\
.10 & .00 & .90 \\
.55 & .00 & .45
\end{bmatrix}.$$

Thus, for example, $p_{3,1} = .55$ and $p_{2,3} = .90$. The transition diagram for this DTMC is shown in Figure 2.1. Note that it has no arc from state 3 to state 2, since $p_{3,2} = 0$.

Fig. 2.1 Transition diagram of the DTMC in Example 2.1.
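Because each row of a transition matrix is a conditional pmf, simulating a DTMC amounts to repeatedly sampling the next state from the row of the current state. A minimal sketch in Python, using the three-state matrix read off Figure 2.1 (the row placement of the arc labels is an assumption of this sketch; states 1–3 map to indices 0–2):

```python
import random

# Transition matrix of Example 2.1 (states 1, 2, 3 -> indices 0, 1, 2).
P = [[0.20, 0.30, 0.50],
     [0.10, 0.00, 0.90],
     [0.55, 0.00, 0.45]]

def step(state, rng):
    """Sample the next state from the pmf in row `state` of P."""
    u, acc = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u < acc:
            return j
    return len(P) - 1  # guard against floating-point round-off

def sample_path(x0, n, seed=0):
    """Simulate X_0, X_1, ..., X_n starting from X_0 = x0."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path

path = sample_path(x0=0, n=1000)
```

Since $p_{3,2} = 0$, a simulated path can never move from state 3 directly to state 2, mirroring the missing arc in the transition diagram.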

**Theorem 2.1.** (Properties of a Transition Matrix). Let $P = [p_{i,j}]$ be an $N \times N$ transition probability matrix of a DTMC $\{X_n, n \ge 0\}$ on state space $S = \{1, 2, \ldots, N\}$. Then

1. $p_{i,j} \ge 0$ for $1 \le i, j \le N$,
2. $\sum_{j=1}^{N} p_{i,j} = 1$ for $1 \le i \le N$.

**Proof.** The $p_{i,j}$ are conditional probabilities and hence nonnegative. Furthermore,

$$\sum_{j=1}^{N} p_{i,j} = \sum_{j=1}^{N} P(X_{n+1} = j \mid X_n = i) = P(X_{n+1} \in S \mid X_n = i) = 1, \tag{2.6}$$

since $X_{n+1}$ must take some value in $S$. $\square$

**Example 2.2.** [Only fragments of this example survive: it introduces a process $\{Y_n, n \ge 0\}$ and checks the Markov property directly by computing $P(Y_{n+1} = j \mid Y_n, Y_{n-1}, \ldots, Y_0)$.]

**Example 2.3.** (Weather Model). The weather in the city of Heavenly is classified as sunny (state 1), cloudy (state 2), or rainy (state 3). Tomorrow's weather depends on the past only through today's weather, so the weather process $\{X_n, n \ge 0\}$ is a DTMC on $S = \{1, 2, 3\}$. [The numerical transition probabilities $p_{i,j}$ given in the original text are lost.]

**Example 2.4.** (Inventory System). [Fragments only: $X_n$ is a stock level at the start of week $n$, $D_n$ is the demand during week $n$, the demands $\{D_n, n \ge 1\}$ are i.i.d. with the pmf given in Table 2.1, and $X_{n+1}$ is determined by $X_n$ and the demand under a replenishment policy whose details are lost, making $\{X_n, n \ge 0\}$ a DTMC.]

Table 2.1 Demand distribution for Example 2.4.

| $k$ | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| $P(D_n = k)$ | .0498 | .1494 | .2240 | .2240 | .1680 |
| $P(D_n \ge k)$ | 1.000 | .9502 | .8008 | .5768 | .3528 |

[The detailed computation of the transition probabilities is lost, apart from its pattern: the Markov property holds because the demands are i.i.d., so $P(X_{n+1} = j \mid X_n = i, X_{n-1}, \ldots, X_0)$ depends only on $i$ and $j$, and each $p_{i,j}$ reduces to terms of the form $P(D_n = k)$ or $P(D_n \ge k)$ read from Table 2.1. For instance, the surviving fragments compute $p_{5,5}$ as a sum of two such terms.]

**Example 2.5.** (Manufacturing). [Fragments only: a manufacturing operation in which parts accumulate in two bins; the bin-content process $\{X_n, n \ge 0\}$ is modeled as a DTMC whose transition probabilities are built from events of probability $\tfrac{1}{2}$. Example 2.11 revisits this model, with state 0 denoting both bins empty.]

**Example 2.6.** [Fragments only: the example verifies the Markov property for a process whose definition involves shifted indices such as $X_{n+1-k}$ and $X_{n-k}$; the full description is not recoverable.]

**Example 2.7.** [Fragments only: the example builds a DTMC from an i.i.d. sequence $\{I_n, n \ge 1\}$, computing $P(X_{n+1} = j \mid X_n = i)$ by writing $X_{n+1}$ as a function of $X_n$ and $I_{n+1}$; the specific system described is not recoverable.]

**Example 2.8.** (Telecommunications). Packets arrive at a network switch that can buffer at most $K$ packets; arrivals to a full buffer are lost. Time is slotted, at most one packet is transmitted per slot, and $A_n$ denotes the number of packets arriving during the $n$th slot, where $\{A_n, n \ge 1\}$ are i.i.d. with $a_k = P(A_n = k)$, $k \ge 0$. Let $X_n$ be the number of packets in the buffer at the end of the $n$th slot. The surviving formulas give the dynamics

$$X_{n+1} = \begin{cases} \min\{A_{n+1}, K\} & \text{if } X_n = 0, \\ \min\{X_n - 1 + A_{n+1}, K\} & \text{if } X_n \ge 1, \end{cases}$$

so $\{X_n, n \ge 0\}$ is a DTMC on $\{0, 1, \ldots, K\}$. For $X_n = 0$,

$$p_{0,j} = P(\min\{A_{n+1}, K\} = j) = \begin{cases} a_j & 0 \le j < K, \\ \sum_{k=K}^{\infty} a_k & j = K, \end{cases}$$

while for $1 \le i \le K$,

$$p_{i,j} = P(\min\{X_n - 1 + A_{n+1}, K\} = j \mid X_n = i) = \begin{cases} a_{j-i+1} & i - 1 \le j < K, \\ \sum_{k=K-i+1}^{\infty} a_k & j = K, \end{cases}$$

and $p_{i,j} = 0$ for $j < i - 1$. Writing $b_j = \sum_{k=j}^{\infty} a_k$ for the tail probabilities, the transition matrix over the states $0, 1, \ldots, K-1, K$ is

$$P = \begin{bmatrix}
a_0 & a_1 & \cdots & a_{K-1} & b_K \\
a_0 & a_1 & \cdots & a_{K-1} & b_K \\
0   & a_0 & \cdots & a_{K-2} & b_{K-1} \\
\vdots &  & \ddots &         & \vdots \\
0   & 0   & \cdots & a_0     & b_1
\end{bmatrix}.$$
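The band structure of this matrix is easy to generate programmatically: each row places the arrival pmf starting at the post-departure state, with all overflow mass lumped into state $K$. A sketch, using a Poisson arrival pmf with mean 1 purely as an illustrative choice (it matches the numbers later tabulated in Table 2.2; the capacity $K = 7$ here is an arbitrary illustration, not a value from the text):

```python
from math import exp, factorial

def buffer_matrix(K, a):
    """Transition matrix of the buffer DTMC of Example 2.8 on states 0..K.

    `a` is a function giving the arrival pmf a_k = P(A_n = k).
    Row i: one packet departs first (if i >= 1), then arrivals fill the
    buffer; the tail mass of the arrival distribution lands in state K.
    """
    P = []
    for i in range(K + 1):
        base = max(i - 1, 0)          # buffer content after the departure
        row = [0.0] * (K + 1)
        for j in range(base, K):
            row[j] = a(j - base)      # exactly j - base arrivals
        row[K] = 1.0 - sum(row)       # b_{K-base}: buffer fills up
        P.append(row)
    return P

a = lambda k: exp(-1.0) / factorial(k)   # illustrative Poisson(1) arrivals
P = buffer_matrix(7, a)
```

Note that rows 0 and 1 coincide, exactly as in the displayed matrix, because an empty buffer and a buffer holding one departing packet leave the same space for arrivals.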

Suppose the initial distribution of the DTMC is given by $a_i = P(X_0 = i)$, $1 \le i \le N$, collected in the row vector $a = [a_1, a_2, \ldots, a_N]$. The pmf of $X_n$ is then obtained by conditioning on the initial state:

$$P(X_n = j) = \sum_{i=1}^{N} P(X_n = j \mid X_0 = i)\, P(X_0 = i) = \sum_{i=1}^{N} a_i\, P(X_n = j \mid X_0 = i). \tag{2.15}$$

Define

$$a^{(n)}_j = P(X_n = j), \tag{2.16}$$

$$p^{(n)}_{i,j} = P(X_n = j \mid X_0 = i). \tag{2.17}$$

The quantity $p^{(n)}_{i,j}$ is called the $n$-step transition probability, and the matrix

$$P^{(n)} = \begin{bmatrix}
p^{(n)}_{1,1} & p^{(n)}_{1,2} & p^{(n)}_{1,3} & \cdots & p^{(n)}_{1,N} \\
p^{(n)}_{2,1} & p^{(n)}_{2,2} & p^{(n)}_{2,3} & \cdots & p^{(n)}_{2,N} \\
p^{(n)}_{3,1} & p^{(n)}_{3,2} & p^{(n)}_{3,3} & \cdots & p^{(n)}_{3,N} \\
\vdots        & \vdots        & \vdots        &        & \vdots        \\
p^{(n)}_{N,1} & p^{(n)}_{N,2} & p^{(n)}_{N,3} & \cdots & p^{(n)}_{N,N}
\end{bmatrix}$$

is called the $n$-step transition probability matrix. Two special cases, $P^{(0)}$ and $P^{(1)}$, are immediate: $p^{(0)}_{i,j} = P(X_0 = j \mid X_0 = i)$ is 1 if $i = j$ and 0 otherwise, so $P^{(0)} = I$, the identity matrix, while $p^{(1)}_{i,j} = P(X_1 = j \mid X_0 = i) = p_{i,j}$, so $P^{(1)} = P$. Writing the pmf of $X_n$ as the row vector $a^{(n)} = [a^{(n)}_1, a^{(n)}_2, \ldots, a^{(n)}_N]$, we have $a^{(0)} = a$, and (2.15) can be written compactly in matrix form as $a^{(n)} = a^{(0)} P^{(n)}$.

The next theorem shows that $P^{(n)}$ need not be computed from scratch.

**Theorem 2.2.** ($n$-Step Transition Probability Matrix). $P^{(n)} = P^n$, where $P^n$ is the $n$th power of the matrix $P$.

**Proof.** The claim holds for $n = 0$ and $n = 1$, since $P^{(0)} = I = P^0$ and $P^{(1)} = P = P^1$. For $n \ge 1$, condition on $X_{n-1}$:

$$p^{(n)}_{i,j} = P(X_n = j \mid X_0 = i) = \sum_{k=1}^{N} P(X_n = j \mid X_{n-1} = k, X_0 = i)\, P(X_{n-1} = k \mid X_0 = i)$$

$$= \sum_{k=1}^{N} p^{(n-1)}_{i,k}\, P(X_n = j \mid X_{n-1} = k) = \sum_{k=1}^{N} p^{(n-1)}_{i,k}\, P(X_1 = j \mid X_0 = k) = \sum_{k=1}^{N} p^{(n-1)}_{i,k}\, p_{k,j}, \tag{2.23}$$

where we used the Markov property and time homogeneity. In matrix form, (2.23) reads $P^{(n)} = P^{(n-1)} P$. Hence $P^{(2)} = P^{(1)} P = P^2$, $P^{(3)} = P^{(2)} P = P^3$, and, by induction, $P^{(n)} = P^n$ for all $n \ge 0$. $\square$
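Theorem 2.2 reduces transient analysis to matrix powering: the pmf of $X_n$ is the row vector $a^{(0)} P^n$. A sketch in plain Python (the three-state matrix of Example 2.1, as read from Figure 2.1, serves as the illustration; the initial distribution chosen here is arbitrary):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(P, n):
    """Compute P^n, with P^0 = I (Theorem 2.2: P^(n) = P^n)."""
    size = len(P)
    R = [[float(i == j) for j in range(size)] for i in range(size)]
    for _ in range(n):
        R = matmul(R, P)
    return R

def pmf_at(a0, P, n):
    """a^(n) = a^(0) P^n: the pmf of X_n as a row vector."""
    Pn = matpow(P, n)
    return [sum(a0[i] * Pn[i][j] for i in range(len(a0)))
            for j in range(len(a0))]

P = [[0.20, 0.30, 0.50],
     [0.10, 0.00, 0.90],
     [0.55, 0.00, 0.45]]
a3 = pmf_at([1.0, 0.0, 0.0], P, 3)   # pmf of X_3 given X_0 = state 1
```

Because every $P^n$ is again a stochastic matrix, the computed pmf automatically sums to one up to round-off.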

**Corollary 2.1.** The pmf of $X_n$ is given by

$$a^{(n)} = a^{(0)} P^n.$$

**Proof.** Immediate from (2.15)–(2.17) and Theorem 2.2. $\square$

**Corollary 2.2.** For all $n, m \ge 0$,

$$P(X_{n+m} = j \mid X_n = i, X_{n-1}, \ldots, X_0) = P(X_{n+m} = j \mid X_n = i) = p^{(m)}_{i,j}.$$

**Proof.** Follows from the Markov property and time homogeneity by the same conditioning argument as in the proof of Theorem 2.2. $\square$

**Theorem 2.3.** (Chapman–Kolmogorov Equation). The $n$-step transition probabilities satisfy the following equation, called the Chapman–Kolmogorov equation:

$$p^{(n+m)}_{i,j} = \sum_{k=1}^{N} p^{(n)}_{i,k}\, p^{(m)}_{k,j}.$$

**Proof.** Condition on $X_n$:

$$p^{(n+m)}_{i,j} = P(X_{n+m} = j \mid X_0 = i) = \sum_{k=1}^{N} P(X_{n+m} = j \mid X_n = k, X_0 = i)\, P(X_n = k \mid X_0 = i)$$

$$= \sum_{k=1}^{N} P(X_m = j \mid X_0 = k)\, P(X_n = k \mid X_0 = i) \quad \text{(by Corollary 2.2)}$$

$$= \sum_{k=1}^{N} p^{(n)}_{i,k}\, p^{(m)}_{k,j}. \qquad \square$$

In matrix terms, the Chapman–Kolmogorov equation states that $P^{(n+m)} = P^{(n)} P^{(m)} = P^{(m)} P^{(n)}$, which also follows directly from Theorem 2.2, since $P^{n+m} = P^n P^m = P^m P^n$.
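The matrix form of the Chapman–Kolmogorov equation can be checked numerically for any stochastic matrix. A short sketch, again using the Example 2.1 matrix as the test case:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(P, n):
    """Compute P^n, with P^0 = I."""
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = matmul(R, P)
    return R

P = [[0.20, 0.30, 0.50],
     [0.10, 0.00, 0.90],
     [0.55, 0.00, 0.45]]

lhs = matpow(P, 5)                        # P^(2+3)
rhs = matmul(matpow(P, 2), matpow(P, 3))  # P^(2) P^(3)
ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
         for i in range(3) for j in range(3))
```

The agreement is exact up to floating-point round-off, reflecting that the identity is just associativity of matrix multiplication.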

**Example 2.9.** Consider the DTMC of Example 2.1, with a given initial distribution $a^{(0)}$. Using Corollary 2.1, we see that the pmf of $X_3$ is $a^{(3)} = a^{(0)} P^3$. [The numerical initial distribution and the resulting pmf given in the original text are lost.]

**Example 2.10.** Suppose the weather in the city of Heavenly changes according to the model in Example 2.3, and assume that it is sunny today in the city of Heavenly. [The remainder of this example, a decision question answered by computing an expected reward $E(R)$ from $n$-step transition probabilities of the weather DTMC, is not recoverable.]

**Example 2.11.** (Manufacturing). Consider the manufacturing operation described in Example 2.5. Suppose that both bins are empty at time 0. Let $\{X_n, n \ge 0\}$ be the DTMC described in Example 2.5. We see that we are asked to compute $p^{(8)}_{0,0}$ and $p^{(8)}_{0,2}$, which by Theorem 2.2 are entries of $P^8$. [The numerical values given in the original text are lost.]

**Example 2.12.** (Expected Packet Loss). Consider the packet buffer of Example 2.8, and suppose the buffer is empty at time 0. Let $Y_n$ be the number of packets lost during the $n$th slot; we compute $E(Y_n)$. Packets are lost in slot $n+1$ only if the arrivals overflow the space left in the buffer: if $X_n = 0$, the number lost is $\max\{0, A_{n+1} - K\}$, and if $X_n = k \ge 1$, it is $\max\{0, k - 1 + A_{n+1} - K\}$. Conditioning on $X_n$, whose pmf (starting from the empty state) is given by the $p^{(n)}_{0,k}$,

$$E(Y_{n+1}) = E(\max\{0, A_{n+1} - K\})\, p^{(n)}_{0,0} + \sum_{k=1}^{K} E(\max\{0, k - 1 + A_{n+1} - K\})\, p^{(n)}_{0,k}.$$

Writing the expectations in terms of the arrival pmf, this becomes

$$E(Y_{n+1}) = p^{(n)}_{0,0} \sum_{r=K}^{\infty} (r - K)\, P(A_n = r) + \sum_{k=1}^{K} p^{(n)}_{0,k} \sum_{r=K+1-k}^{\infty} (r - K - 1 + k)\, P(A_n = r). \tag{2.28}$$

Suppose the arrivals have the distribution given in Table 2.2. Using the transition probability matrix of Example 2.8, the $p^{(n)}_{0,k}$ are obtained from it by using Theorem 2.2. These are then used in (2.28) to compute $E(Y_n)$, shown in Table 2.3 for several values of $n$. Note that $E(Y_n)$ seems to tend to a limiting value. We present more on this in Section 2.5.

Table 2.2 Data for Example 2.12.

| $r$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|
| $P(A_n = r)$ | .3679 | .3679 | .1839 | .0613 | .0153 | .0031 | .0005 | .0001 |
| $P(A_n \ge r)$ | 1.000 | .6321 | .2642 | .0803 | .0190 | .0037 | .0006 | .0001 |

Table 2.3 Expected packet loss in Example 2.12.

| $n$ | 1 | 2 | 5 | 10 | 20 | 30 | 40 | 80 |
|---|---|---|---|---|---|---|---|---|
| $E(Y_n)$ | .0000 | .0003 | .0063 | .0249 | .0505 | .0612 | .0654 | .0681 |
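Equation (2.28) is mechanical to evaluate once the transition matrix of Example 2.8 is in hand. A sketch under stated assumptions: the arrival pmf is taken as Poisson with mean 1, which matches the numbers in Table 2.2, while the buffer capacity $K = 7$ is an assumption of this sketch, since the value used in the original example is not recoverable here:

```python
from math import exp, factorial

K = 7            # buffer capacity -- an assumption, not from the text
M = 60           # truncation point for the arrival pmf tail sums

def a(k):        # Poisson(1) arrival pmf, matching Table 2.2
    return exp(-1.0) / factorial(k)

# Buffer DTMC of Example 2.8 on states 0..K.
P = []
for i in range(K + 1):
    base = max(i - 1, 0)
    row = [0.0] * (K + 1)
    for j in range(base, K):
        row[j] = a(j - base)
    row[K] = 1.0 - sum(row)
    P.append(row)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expected_loss(n):
    """E(Y_n) via (2.28), starting from an empty buffer at time 0."""
    pn = [[float(i == j) for j in range(K + 1)] for i in range(K + 1)]
    for _ in range(n - 1):
        pn = matmul(pn, P)               # p^(n-1), as (2.28) requires
    total = 0.0
    for k in range(K + 1):
        free = K - max(k - 1, 0)         # space left after the departure
        loss = sum((r - free) * a(r) for r in range(free + 1, M))
        total += pn[0][k] * loss
    return total
```

Starting from an empty buffer, the loss in the first slot is negligible and the sequence $E(Y_n)$ rises toward a plateau, the qualitative behavior visible in Table 2.3.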

Let $N_j(n)$, $1 \le j \le N$, be the number of visits to state $j$ by the DTMC over the time span $\{0, 1, \ldots, n\}$. The quantity

$$m_{i,j}(n) = E(N_j(n) \mid X_0 = i)$$

is called the occupancy time of state $j$ up to time $n$ starting from state $i$. The occupancy times are collected in the $N \times N$ occupancy times matrix

$$M(n) = \begin{bmatrix}
m_{1,1}(n) & m_{1,2}(n) & m_{1,3}(n) & \cdots & m_{1,N}(n) \\
m_{2,1}(n) & m_{2,2}(n) & m_{2,3}(n) & \cdots & m_{2,N}(n) \\
m_{3,1}(n) & m_{3,2}(n) & m_{3,3}(n) & \cdots & m_{3,N}(n) \\
\vdots     & \vdots     & \vdots     &        & \vdots     \\
m_{N,1}(n) & m_{N,2}(n) & m_{N,3}(n) & \cdots & m_{N,N}(n)
\end{bmatrix}.$$

**Theorem 2.4.** (Occupancy Times). Let $\{X_n, n \ge 0\}$ be a DTMC on state space $S = \{1, 2, \ldots, N\}$, with transition probability matrix $P$. The occupancy times matrix is given by

$$M(n) = \sum_{r=0}^{n} P^r,$$

where $P^0 = I$.

**Proof.** Fix $i$ and $j$, and for $0 \le r \le n$ let $Z_r = 1$ if $X_r = j$ and $Z_r = 0$ otherwise, so that $N_j(n) = Z_0 + Z_1 + \cdots + Z_n$. Then

$$m_{i,j}(n) = E(N_j(n) \mid X_0 = i) = \sum_{r=0}^{n} E(Z_r \mid X_0 = i) = \sum_{r=0}^{n} P(Z_r = 1 \mid X_0 = i) = \sum_{r=0}^{n} P(X_r = j \mid X_0 = i) = \sum_{r=0}^{n} p^{(r)}_{i,j}. \tag{2.32}$$

The theorem follows by writing (2.32) in matrix form and applying Theorem 2.2. $\square$
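Theorem 2.4 yields occupancy times by summing matrix powers. A useful sanity check: each row of $M(n)$ must sum to $n + 1$, because the chain occupies exactly one state at each of the $n + 1$ time points. A sketch, with the Example 2.1 matrix as illustration:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def occupancy_matrix(P, n):
    """M(n) = sum_{r=0}^{n} P^r (Theorem 2.4), with P^0 = I."""
    size = len(P)
    Pr = [[float(i == j) for j in range(size)] for i in range(size)]  # P^0
    M = [row[:] for row in Pr]
    for _ in range(n):
        Pr = matmul(Pr, P)
        M = [[M[i][j] + Pr[i][j] for j in range(size)]
             for i in range(size)]
    return M

P = [[0.20, 0.30, 0.50],
     [0.10, 0.00, 0.90],
     [0.55, 0.00, 0.45]]
M10 = occupancy_matrix(P, 10)   # expected visit counts over times 0..10
```

Entry $M10[i][j]$ estimates nothing and approximates nothing: it is exactly the expected number of visits to state $j+1$ during times $0, \ldots, 10$ starting in state $i+1$.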

**Example 2.13.** Consider the DTMC of Example 2.1. Computing the occupancy times matrix over $\{0, 1, \ldots, 10\}$ by Theorem 2.4,

$$M(10) = \sum_{r=0}^{10} P^r.$$

[The numerical matrix given in the original text is lost.]

**Example 2.14.** (Weather Model). Consider the three-state weather model described in Example 2.3. Suppose it is sunny in Heavenly today. Compute the expected number of rainy days over the next week. With sunny as state 1 and rainy as state 3, the answer is the occupancy time

$$m_{1,3}(6) = \sum_{r=0}^{6} p^{(r)}_{1,3}.$$

[The numerical answer given in the original text is lost.]

We now study the limiting behavior of a DTMC $\{X_n, n \ge 0\}$. Does the pmf of $X_n$ approach a limit as $n \to \infty$? If it does, we call the limit $\pi = [\pi_1, \pi_2, \ldots, \pi_N]$ the limiting distribution of the DTMC:

$$\pi_j = \lim_{n \to \infty} P(X_n = j), \quad j \in S. \tag{2.34}$$

The next question is a natural follow-up:

If the limiting distribution exists, is it unique?

### This question makes sense since it is conceivable that the limit may depend upon the starting state, or the initial distribution of the DTMC. Finally, the question of practical importance is:

If there is a unique limiting distribution, how do we compute it?

**Theorem 2.5.** (Limiting Distribution). If the limiting distribution $\pi = [\pi_1, \pi_2, \ldots, \pi_N]$ exists, it satisfies

$$\pi_j = \sum_{i=1}^{N} \pi_i\, p_{i,j}, \quad j \in S, \tag{2.35}$$

and

$$\sum_{j=1}^{N} \pi_j = 1. \tag{2.36}$$

**Proof.** Conditioning on $X_n$,

$$P(X_{n+1} = j) = \sum_{i=1}^{N} P(X_n = i)\, p_{i,j}.$$

Letting $n \to \infty$ on both sides and using $\lim_{n \to \infty} P(X_n = i) = \pi_i$ and $\lim_{n \to \infty} P(X_{n+1} = j) = \pi_j$ yields (2.35). Equation (2.36) holds because the $\pi_j$, as the limit of pmfs on a finite state space, form a pmf. $\square$

Equations (2.35) are called the balance equations, and (2.36) is called the normalizing equation.

**Example 2.15.** Consider the DTMC $\{X_n, n \ge 0\}$ of Example 2.1. Computing successive powers of its transition matrix ($P^2$, $P^4$, $P^{10}$, and so on) shows the rows of $P^n$ approaching a common row vector: the $n$-step transition probabilities become insensitive to the starting state, and the pmf of $X_n$ converges to that common row, which is the limiting distribution. [The numerical powers displayed in the original text are lost.]

**Example 2.16.** [Fragments only: a DTMC for which the pmf of $X_n$ does not converge; the pmfs along the even and odd subsequences $X_{2n}$ and $X_{2n-1}$ approach two different vectors, so no limiting distribution exists. Example 2.17 exhibits its stationary distribution.]

**Definition 2.2.** (Stationary Distribution). A distribution $\pi^* = [\pi^*_1, \pi^*_2, \ldots, \pi^*_N]$ is called a stationary distribution of the DTMC if

$$P(X_0 = i) = \pi^*_i \text{ for all } i \in S \ \Rightarrow\ P(X_n = i) = \pi^*_i \text{ for all } n \ge 0 \text{ and all } i \in S.$$

**Theorem 2.6.** (Stationary Distribution). $\pi^* = [\pi^*_1, \pi^*_2, \ldots, \pi^*_N]$ is a stationary distribution if and only if it satisfies

$$\pi^*_j = \sum_{i=1}^{N} \pi^*_i\, p_{i,j}, \quad j \in S,$$

and

$$\sum_{j=1}^{N} \pi^*_j = 1.$$

**Proof.** Suppose $\pi^*$ is a stationary distribution, and let $P(X_0 = j) = \pi^*_j$ for all $j$. Then $P(X_1 = j) = \pi^*_j$ as well, and conditioning on $X_0$,

$$\pi^*_j = P(X_1 = j) = \sum_{i=1}^{N} P(X_0 = i)\, p_{i,j} = \sum_{i=1}^{N} \pi^*_i\, p_{i,j},$$

which is the first set of equations; the second holds because $\pi^*$ is a pmf. Conversely, suppose $\pi^*$ satisfies both sets of equations, and set $P(X_0 = i) = \pi^*_i$ for all $i$. Then

$$P(X_1 = j) = \sum_{i=1}^{N} P(X_0 = i)\, p_{i,j} = \sum_{i=1}^{N} \pi^*_i\, p_{i,j} = \pi^*_j,$$

and, repeating the argument by induction, $P(X_n = j) = \pi^*_j$ for all $n \ge 0$. Hence $\pi^*$ is a stationary distribution. $\square$

Theorem 2.6 shows that a stationary distribution $\pi^*$ satisfies the same balance equations and normalizing equation as the limiting distribution $\pi$. This yields the following corollary.

Corollary 2.3. A limiting distribution, when it exists, is also a stationary distribution.

**Proof.** Let $\pi$ be a limiting distribution. Then, from Theorem 2.5, it satisfies (2.35) and (2.36). Hence, by Theorem 2.6, it is also a stationary distribution. $\square$

**Example 2.17.** Using Theorem 2.6 or Corollary 2.3, we see that the limiting distribution computed in Example 2.15 is also a stationary distribution of the DTMC of Example 2.15. Similarly, $\pi^* = [.05, .50, .45]$ is a stationary distribution for the DTMC in Example 2.16, but there is no limiting distribution for this DTMC.
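For a finite DTMC, a stationary distribution can be computed directly from the balance equations and the normalizing equation: since one balance equation is redundant, replace it with the normalization and solve the resulting linear system. A sketch in plain Python, using the Example 2.1 matrix (as read from Figure 2.1) as the illustration:

```python
def stationary(P):
    """Solve pi = pi P together with sum(pi) = 1 by Gaussian elimination,
    replacing one (redundant) balance equation with the normalization."""
    n = len(P)
    # Balance equations sum_i pi_i (P[i][j] - delta_ij) = 0, j = 0..n-2,
    # plus the normalizing equation sum_i pi_i = 1.
    A = [[P[i][j] - (1.0 if i == j else 0.0) for i in range(n)]
         for j in range(n - 1)]
    A.append([1.0] * n)
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
        pi[r] = (b[r] - s) / A[r][r]
    return pi

P = [[0.20, 0.30, 0.50],
     [0.10, 0.00, 0.90],
     [0.55, 0.00, 0.45]]
pi = stationary(P)
```

The returned vector is a fixed point of the chain: feeding it back through the balance equations reproduces it, which is exactly the stationarity property of Theorem 2.6.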

**Example 2.18.** [Fragments only: the example computes limits of the form $\lim_{n \to \infty} P(X_n = j)$ for a DTMC on the states $\{1, 2, 3\}$; the details are not recoverable.]

Let $N_j(n)$ be the number of times the DTMC visits state $j$ over the time span $\{0, 1, \ldots, n\}$. We studied the expected value of this quantity in Section 2.4. The occupancy of state $j$ is defined as

$$\hat{\pi}_j = \lim_{n \to \infty} \frac{E(N_j(n) \mid X_0 = i)}{n + 1},$$

provided the limit exists.
