Markov Diagrams

The term "Markov Chain," named after Russian mathematician Andrey Markov, is used across many applications to represent a stochastic process: a sequence of random variables describing the evolution of a system. Events are "chained" or "linked" serially together through memoryless transitions from one state to another. The term "memoryless" is used because past events are forgotten, as they are irrelevant; an event or state depends only on the state or event that immediately preceded it.

=Concept and Methodology=

The concept behind the method is that given a system of states with transitions between them, the analysis will give the probability of being in a particular state at a particular time. If some of the states are considered to be unavailable states for the system, then availability/reliability analysis can be performed for the system as a whole.

=Discrete Markov Chains: Limiting Probabilities=

Transition Matrix
A system has a finite number of states {0, 1, 2, …, N} and transitions from state to state are random. The transition matrix describes the possible transitions of a Markov chain, with entry $$P_{ij}$$ giving the probability of moving from state i to state j:

$$P(X_{n+1}=j \mid X_n=i) = P_{ij}, \qquad 0 \le P_{ij} \le 1$$
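As a quick sketch (the three states and all matrix entries below are made up for illustration), a transition matrix can be stored as a NumPy array and sanity-checked against the constraints above:

```python
import numpy as np

# Hypothetical 3-state transition matrix: P[i][j] = P(X_{n+1} = j | X_n = i).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.0, 0.4, 0.6],
])

# Every entry lies in [0, 1] ...
assert np.all((0.0 <= P) & (P <= 1.0))
# ... and each row is a full probability distribution over next states.
assert np.allclose(P.sum(axis=1), 1.0)
```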



Markov Chain Diagram
Markov chain diagrams can be used to label events and transitions based upon a transition matrix.

Chapman-Kolmogorov Equation
The Chapman-Kolmogorov equation was formulated independently by British mathematician Sydney Chapman and Russian mathematician Andrey Kolmogorov. It can be used to provide the transition densities of a Markov sequence.

Let $$p_i(n) = P(X_n = i)$$

then:


 * $$P(X_{n+1}=j) = \sum_{i \mathop =0}^{N} P(X_{n+1}=j \mid X_n=i)\,P(X_n=i)$$

so:


 * $$p_j(n+1) = \sum_{i \mathop =0}^{N} p_i(n)\,P_{ij}$$

With vector notation $$\underline{p}(n) = (p_0(n), p_1(n), \ldots, p_N(n))$$	(row vector)

$$\underline{p}(n+1) = \underline{p}(n)\,\underline{P} = \underline{p}(n-1)\,\underline{P}^2 = \cdots = \underline{p}(0)\,\underline{P}^{n+1}$$

Let $$P_{ij}(m) = P(X_{n+m} = j \mid X_n = i)$$ and $$\underline{P}(m) = [P_{ij}(m)]$$

then:


 * $$\underline{P}(n+m) = \underline{P}(n)\,\underline{P}(m)$$ and $$\underline{P}(n) = \underline{P}^{\,n}$$
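These relations can be checked numerically; the two-state matrix and starting distribution below are purely illustrative:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
p0 = np.array([1.0, 0.0])   # start in state 0 with certainty

# p(n) = p(0) P^n: distribution after three steps (row vector times matrix power).
p3 = p0 @ np.linalg.matrix_power(P, 3)

# Chapman-Kolmogorov in matrix form: P(n+m) = P(n) P(m), e.g. P^3 = P^2 * P.
assert np.allclose(np.linalg.matrix_power(P, 3),
                   np.linalg.matrix_power(P, 2) @ P)
# p(n) remains a probability distribution.
assert np.isclose(p3.sum(), 1.0)
```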

Accessible and Communicating States
State j is accessible from state i if, for some m:


 * $$P_{ij}(m) > 0$$

State i communicates with state j if j is accessible from i and state i is also accessible from j:


 * $$\sum_{m \mathop =1}^{\infty}P_{ij}(m) > 0$$ and $$\sum_{m \mathop =1}^{\infty}P_{ji}(m) > 0$$

The Markov chain is irreducible if every state i communicates with all other states and with itself.
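For a finite chain, irreducibility can be checked from the one-step matrix alone, since any reachable state is reachable within at most N steps. A minimal sketch (the function name and the two test matrices are ours):

```python
import numpy as np

def is_irreducible(P):
    """True if every state communicates with every other state."""
    n = len(P)
    A = (np.asarray(P) > 0).astype(int)   # one-step reachability pattern
    # (I + A)^(n-1) has a positive (i, j) entry iff j is reachable from i.
    R = np.linalg.matrix_power(np.eye(n, dtype=int) + A, n - 1)
    return bool((R > 0).all())

assert is_irreducible([[0.0, 1.0], [1.0, 0.0]])       # states swap every step
assert not is_irreducible([[1.0, 0.0], [0.5, 0.5]])   # state 0 is absorbing
```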

Recurrent and Transient States
Let fi = P(starting at state i, the system will return to state i)

If fi = 1, then state i is recurrent and is visited infinitely often. If fi < 1, then state i is transient, and repeated returns have smaller and smaller probabilities.

An equivalent criterion: state i is recurrent if and only if


 * $$\sum_{m \mathop =1}^{\infty}P_{ii}(m) = \infty$$

The Markov chain is ergodic if all states are recurrent and aperiodic (there is no d > 1 such that Pii(m) > 0 only if m is a multiple of d).

Limiting Probabilities
For an irreducible, ergodic Markov chain the limit

$$\lim_{m \to \infty}P_{ij}(m) = \pi_j \quad \text{for all } j$$

exists and is independent of i (steady-state probabilities).

 * Theorem: the &pi;j are the unique solution of


 * $$\pi_j = \sum_{i \mathop =0}^{N} \pi_i P_{ij}, \qquad \sum_{j \mathop =0}^{N} \pi_j = 1, \qquad 0 \le \pi_j \le 1$$


 * Method: solve the linear system $$\underline{\pi} = \underline{\pi}\,\underline{P}$$ together with the normalization $$\sum_{j} \pi_j = 1$$.



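The steady-state solve can be sketched as follows (the two-state matrix is illustrative). One equation of π = πP is redundant, so stacking the normalization on top makes the system determined:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
n = len(P)

# pi = pi P  <=>  (P^T - I) pi^T = 0; add the normalization row sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(pi @ P, pi)     # stationarity
assert np.isclose(pi.sum(), 1.0)   # normalization
```

For this matrix π = (5/6, 1/6): in the long run the chain spends five sixths of its time in state 0, regardless of the starting state.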
Mean Time Spent in States
Mean time spent in recurrent states = ∞

Mean time spent in transient states:


 * Sij = starting at state i, the expected number of time periods spent in state j


 * Sij = [[File:MeanTimeSpentInStates.1.PNG]]

where P* contains the rows and columns of the transient states of matrix $$\underline{P}$$:

$$\underline{S} = I + P^{*}\,\underline{S} \quad \Rightarrow \quad \underline{S} = (I - P^{*})^{-1}$$
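The fundamental-matrix formula can be checked numerically; the chain below (two transient states feeding an absorbing state 2) is made up for illustration:

```python
import numpy as np

# Hypothetical chain: states 0 and 1 are transient, state 2 is absorbing.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.4, 0.4],
              [0.0, 0.0, 1.0]])

P_star = P[:2, :2]                       # rows/columns of the transient states
S = np.linalg.inv(np.eye(2) - P_star)    # S = (I - P*)^(-1)

# S also satisfies the defining recursion S = I + P* S.
assert np.allclose(S, np.eye(2) + P_star @ S)
# S[i][j] = expected number of periods spent in transient state j starting from i.
```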

=Continuous Markov Chains: Applications to Non-Repairable Systems=


 * Non-repairable component with failure rate &lambda;
 * P0(t) = P(at time t the component works)
 * P1(t) = P(at time t the component is broken)

P0(t + ∆t) = (1 − λ∆t) P0(t) + 0 · P1(t)	(does not fail during ∆t)

P1(t + ∆t) = λ∆t P0(t) + 1 · P1(t)

since P(fails in time ∆t) = 1 − e<sup>−λ∆t</sup> ≈ 1 − (1 − $$\tfrac{\lambda\Delta t}{1!}+\tfrac{(\lambda\Delta t)^2}{2!}$$ − …) ≈ &lambda;∆t if ∆t is small
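Iterating the two recursions above with a small ∆t reproduces the exact exponential solution P0(t) = e^(−λt); the rate, step, and horizon below are arbitrary illustration values:

```python
import math

lam, dt, T = 0.01, 0.001, 100.0   # failure rate, time step, horizon (assumed)
p0, p1 = 1.0, 0.0                 # component starts in the working state

for _ in range(round(T / dt)):
    p0, p1 = (1 - lam * dt) * p0, lam * dt * p0 + p1

# The discrete recursion approaches the exact solution P0(t) = e^(-lam*t).
assert abs(p0 - math.exp(-lam * T)) < 1e-3
# Probability mass is conserved at every step.
assert abs(p0 + p1 - 1.0) < 1e-9
```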

Method
The method employed to solve a continuous Markov chain problem is a modified Runge-Kutta-Fehlberg (RK45) method, an adaptive-step-size Runge-Kutta method.

User Inputs
The user must provide an initial probability for each state and a transition probability between each state. The initial probabilities of all states must add up to exactly 1.0. If a transition probability between states is not given, it is assumed to be zero.
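These input rules can be sketched as a small validation step (the function and argument names are ours, not part of any particular tool):

```python
def validate_inputs(initial_probs, transitions, n_states):
    """initial_probs: list of alpha_j0; transitions: dict {(l, j): rate}."""
    if abs(sum(initial_probs) - 1.0) > 1e-12:
        raise ValueError("initial state probabilities must sum to exactly 1.0")
    # Any transition rate the user did not supply defaults to zero.
    return {(l, j): transitions.get((l, j), 0.0)
            for l in range(n_states) for j in range(n_states)}

rates = validate_inputs([1.0, 0.0], {(0, 1): 0.01}, 2)
assert rates[(0, 1)] == 0.01 and rates[(1, 0)] == 0.0
```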

Symbol Definitions

 * αj,0 is the initial probability of being in state j (given by the user).
 * ε is the user-defined tolerance (accuracy). The default is 1e-5, and it may only be decreased.
 * λl,j is the transitional failure rate into state wj from state wl
 * wl is the probability of being in the state associated with the λl,j’s
 * λj,k is the transitional failure rate leaving state wj to state wk
 * fj is the change in state probability function (for a given state wj):
 * [[File:ChgInStateProbFunt.Jpg]]

fj is not a function of time, as only constant failure rates are used. This means that the various k values calculated during the RK45 method are only functions of all the w values and the constant failure rates, λ.

The formula for the Runge-Kutta-Fehlberg method (RK45) is given next.

w0 = α

k1 = hf(ti, wi)

k2 = hf(ti+$$\tfrac{h}{4}$$, wi+$$\tfrac{k_1}{4}$$)

k3 = hf(ti+$$\tfrac{3h}{8}$$, wi+$$\tfrac{3}{32}$$k1+$$\tfrac{9}{32}$$k2)

k4 = hf(ti+$$\tfrac{12h}{13}$$, wi+$$\tfrac{1932}{2197}$$k1-$$\tfrac{7200}{2197}$$k2+$$\tfrac{7296}{2197}$$k3)

k5 = hf(ti+h, wi+$$\tfrac{439}{216}$$k1-8k2+$$\tfrac{3680}{513}$$k3-$$\tfrac{845}{4104}$$k4)

k6 = hf(ti+$$\tfrac{h}{2}$$, wi-$$\tfrac{8}{27}$$k1+2k2-$$\tfrac{3544}{2565}$$k3+$$\tfrac{1859}{4104}$$k4-$$\tfrac{11}{40}$$k5)

wi+1 = wi+$$\tfrac{25}{216}$$k1+$$\tfrac{1408}{2565}$$k3+$$\tfrac{2197}{4104}$$k4-$$\tfrac{1}{5}$$k5

w'i+1 = wi+$$\tfrac{16}{135}$$k1+$$\tfrac{6656}{12825}$$k3+$$\tfrac{28561}{56430}$$k4-$$\tfrac{9}{50}$$k5+$$\tfrac{2}{55}$$k6

R = $$\tfrac{1}{h}$$|w'i+1-wi+1|

&delta; = 0.84 $$\left(\tfrac{\varepsilon}{R}\right)^{\tfrac{1}{4}}$$

If R ≤ &epsilon;, then keep w as the current step's solution and move to the next step with step size &delta;h.

If R > &epsilon;, then recalculate the current step with step size &delta;h. The above method is applied to each individual state, not to the system as a whole. Here w is the probability of being in a particular state, and the subscript i indexes the time steps. The calculation must still be carried out for all the states in the system, which later on is represented by the subscript j (one per state).

Detailed Methodology

 * 1) Generate an initial step size h from the available failure rates (1% of the smallest MTTF).
 * 2) Use the RK45 method on all states simultaneously using the given h. (This means that all states must have their k1 values calculated/used together, then k2, then k3, etc).
 * 3) If all calculations are within tolerance, keep the results (the RK4 values, i.e. the w without the prime) and increase h to the smallest of the increase factors δ generated by the method across the states. If some of the calculations are not within tolerance, decrease the step size to the smallest of the decrease factors generated by the method and recalculate with the new h. h should never more than double, so δ must be capped accordingly. Be aware that δ may become infinite if the difference between the RK4 and RK5 results is zero (R = 0); this must also be caught by setting δ = 2 in that case. (In short: if δ is calculated to be greater than 2 for any state, make it equal to 2 for that state.)
 * 4) Repeat steps 2 & 3 as necessary.
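Steps 1-4 can be sketched as follows. This is our own condensed reading of the methodology, not the tool's actual code: it uses a single worst-case error over all states rather than per-state bookkeeping, and the two-state non-repairable example at the bottom is illustrative.

```python
import numpy as np

def rk45_step(f, t, w, h):
    """One Runge-Kutta-Fehlberg step: 4th- and 5th-order estimates of w(t+h)."""
    k1 = h * f(t, w)
    k2 = h * f(t + h/4, w + k1/4)
    k3 = h * f(t + 3*h/8, w + 3*k1/32 + 9*k2/32)
    k4 = h * f(t + 12*h/13, w + 1932*k1/2197 - 7200*k2/2197 + 7296*k3/2197)
    k5 = h * f(t + h, w + 439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104)
    k6 = h * f(t + h/2, w - 8*k1/27 + 2*k2 - 3544*k3/2565 + 1859*k4/4104 - 11*k5/40)
    w4 = w + 25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5
    w5 = w + 16*k1/135 + 6656*k3/12825 + 28561*k4/56430 - 9*k5/50 + 2*k6/55
    return w4, w5

def solve(f, w0, t_end, eps=1e-5, h0=None):
    """Adaptive integration of all state probabilities together (steps 2-4)."""
    t, w = 0.0, np.asarray(w0, dtype=float)
    h = h0 if h0 is not None else t_end / 100.0   # crude initial step (step 1)
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        w4, w5 = rk45_step(f, t, w, h)
        R = np.max(np.abs(w5 - w4)) / h           # worst error estimate over states
        s = 2.0 if R == 0 else min(0.84 * (eps / R) ** 0.25, 2.0)  # growth capped at 2x
        if R <= eps:
            t, w = t + h, w4                      # accept the 4th-order result
        h *= s                                    # grow or shrink the step either way
    return w

# Two-state non-repairable component: dw/dt = Q^T w with failure rate lam.
lam = 0.5
Q_T = np.array([[-lam, 0.0], [lam, 0.0]])
w_final = solve(lambda t, w: Q_T @ w, [1.0, 0.0], t_end=2.0)
# Exact answer for the working state: P0(2) = e^(-lam*2)
assert abs(w_final[0] - np.exp(-lam * 2.0)) < 1e-3
```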

If there are multiple phases, then steps 1-4 need to be performed for each phase where the initial probability of being in a state is equal to the final value from the previous phase.

This methodology provides the ability to give availability and unavailability metrics for the system, as well as point probabilities of being in a certain state. For the reliability metrics, the methodology differs in that each unavailable state is considered a "sink." In other words, all transitions from unavailable to available states are ignored. This could be calculated simultaneously with the availability using the same step sizes as generated there.
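The "sink" construction can be sketched on a transition-rate (generator) matrix; the three states, the rate values, and the choice of state 2 as the unavailable state are all illustrative:

```python
import numpy as np

# Hypothetical generator matrix Q (rows sum to 0); state 2 is the unavailable
# state and normally has a repair transition back to state 0.
Q = np.array([[-0.02,  0.02,  0.00],
              [ 0.01, -0.04,  0.03],
              [ 0.05,  0.00, -0.05]])

unavailable = [2]
Q_rel = Q.copy()
Q_rel[unavailable, :] = 0.0   # ignore every transition out of an unavailable state

# Unavailable states are now absorbing "sinks": integrating the state
# probabilities with Q_rel yields reliability rather than availability.
assert np.allclose(Q_rel[2], 0.0)
assert np.allclose(Q_rel.sum(axis=1), 0.0)
```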

The two results that need to be stored are the time and the corresponding probability of being in the state for each state.