Markov Chains and Stochastic Processes
1. Markov Property
P(X_{t+1}=j | X_t=i, X_{t-1},...) = P(X_{t+1}=j | X_t=i).
Given the present state, the future is conditionally independent of the past.
2. Transition Matrix
Matrix P with entries P_ij = P(X_{t+1}=j | X_t=i); each row sums to 1 (row-stochastic).
n-step transition probabilities are entries of P^n.
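The n-step rule can be checked numerically. A minimal sketch, assuming numpy and a made-up 2-state matrix:

```python
import numpy as np

# Hypothetical 2-state transition matrix (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# 3-step transition probabilities are the entries of P^3.
P3 = np.linalg.matrix_power(P, 3)
print(P3)

# Each row of P^n is still a probability distribution.
print(P3.sum(axis=1))  # -> [1. 1.]
```

Entry (i, j) of P3 is the probability of being in state j three steps after starting in state i.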
3. State Class Concepts
- communicating states: i and j communicate if each is reachable from the other with positive probability
- irreducible chain: all states form one communicating class
- periodic/aperiodic: the period of a state is the gcd of its possible return times; aperiodic means period 1
- transient vs recurrent states: a recurrent state is revisited with probability 1, a transient state is not
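Irreducibility can be tested mechanically as a reachability problem on the graph of positive-probability transitions. A sketch with hypothetical helper names (`reachable`, `is_irreducible` are not standard library functions):

```python
from collections import deque

def reachable(P, i):
    """States reachable from i along edges with positive probability (BFS)."""
    n = len(P)
    seen = {i}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in range(n):
            if P[u][v] > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def is_irreducible(P):
    """A chain is irreducible iff every state can reach every other state."""
    n = len(P)
    return all(reachable(P, i) == set(range(n)) for i in range(n))

# A chain with an absorbing state 0 is reducible; a fully mixing one is not.
P_reducible = [[1.0, 0.0], [0.5, 0.5]]
P_irreducible = [[0.5, 0.5], [0.5, 0.5]]
print(is_irreducible(P_reducible))    # False
print(is_irreducible(P_irreducible))  # True
```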
4. Stationary Distribution
pi is stationary if pi P = pi and sum_i pi_i = 1.
For irreducible, aperiodic, finite chains, the distribution of X_t converges to pi regardless of the initial state.
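The defining equations pi P = pi, sum(pi) = 1 form a linear system that can be solved directly. A sketch assuming numpy (the `stationary` helper is ours, not a library function):

```python
import numpy as np

def stationary(P):
    """Solve pi P = pi together with sum(pi) = 1 as a least-squares system."""
    n = P.shape[0]
    # Rows of (P^T - I) encode pi P = pi; the final row enforces normalization.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# The two-state weather chain from the worked example below:
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
print(stationary(P))  # -> approximately [0.6, 0.4]
```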
5. Detailed Balance (Reversible Chains)
Condition: pi_i P_ij = pi_j P_ji for all i, j.
Useful for constructing MCMC samplers: choosing P to satisfy detailed balance with a target pi guarantees pi is stationary.
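Detailed balance is a pairwise check on probability flows, which makes it easy to verify numerically. A sketch assuming numpy (the checker function is a hypothetical name):

```python
import numpy as np

def satisfies_detailed_balance(pi, P, tol=1e-12):
    """Check pi_i P_ij == pi_j P_ji for all pairs (i, j)."""
    F = pi[:, None] * P  # F[i, j] is the stationary flow from i to j
    return np.allclose(F, F.T, atol=tol)

# The two-state weather chain from the worked example is reversible:
# 0.6 * 0.2 = 0.4 * 0.3 = 0.12.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
pi = np.array([0.6, 0.4])
print(satisfies_detailed_balance(pi, P))  # True
```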
6. Worked Example
Two-state weather chain:
P = [[0.8, 0.2],
[0.3, 0.7]]
Solve pi = pi P with pi_1 + pi_2 = 1: balancing flows gives 0.2 pi_1 = 0.3 pi_2, so pi = [0.6, 0.4]. Long-run rainy probability is 0.4.
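The convergence claim from section 4 can be seen directly on this chain: iterating the transition matrix from any starting distribution approaches pi. A sketch assuming numpy:

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# Start from "definitely sunny" and repeatedly apply the transition matrix.
dist = np.array([1.0, 0.0])
for _ in range(50):
    dist = dist @ P
print(dist)  # -> approximately [0.6, 0.4]
```

Starting from [0, 1] instead gives the same limit, illustrating independence from the initial state.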
7. CS Applications
- PageRank
- queueing and caching models
- hidden Markov models
- reinforcement learning transition dynamics
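PageRank is itself a stationary-distribution computation: the rank vector is the stationary distribution of a "random surfer" chain. A minimal power-iteration sketch, assuming numpy and a made-up 3-page link graph (the `pagerank` helper and damping choice are illustrative, not a reference implementation):

```python
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """Stationary distribution of a random surfer: with probability d follow
    a random outgoing link, otherwise jump to a uniformly random page."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Row-normalize links; pages with no outlinks jump uniformly.
    link = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n)
    P = d * link + (1 - d) / n  # full transition matrix of the surfer chain
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = r @ P  # power iteration toward the stationary distribution
    return r

# Hypothetical 3-page web forming a cycle: 0 -> 1 -> 2 -> 0.
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
print(pagerank(adj))  # symmetric cycle -> uniform ranks [1/3, 1/3, 1/3]
```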
Exercises
- Compute P^3 for a 2-state chain and interpret the entries.
- Find the stationary distribution of a 3-state chain.
- Give an example of a periodic chain and explain why its distribution fails to converge.
- Show that a random walk on a fully connected graph is irreducible.