Markov Chains (prob140.MarkovChain)
Constructing
Explicitly assigning probabilities

The examples below assume the usual setup for prob140: from datascience import *, from prob140 import *, and import numpy as np.
In [1]: mc_table = Table().states(make_array("A", "B")).transition_probability(make_array(0.5, 0.5, 0.3, 0.7))
In [2]: mc_table
Out[2]:
Source | Target | Probability
A | A | 0.5
A | B | 0.5
B | A | 0.3
B | B | 0.7
In [3]: mc = mc_table.toMarkovChain()
In [4]: mc
Out[4]:
A B
A 0.5 0.5
B 0.3 0.7
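Conceptually, the n-step behavior of this chain is governed by powers of its transition matrix: starting from a distribution over states, the distribution after n steps is that row vector multiplied by the n-th matrix power. A minimal NumPy sketch of this idea, using the matrix from the example above (plain NumPy here, not the prob140 API):

```python
import numpy as np

# Transition matrix from the example above (rows are sources, columns are targets).
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Every row of a transition matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1)

# Starting entirely in state A, the distribution after n steps is the
# initial row vector times the n-th matrix power of P.
start = np.array([1.0, 0.0])
after_3 = start @ np.linalg.matrix_power(P, 3)
print(after_3)  # → [0.38 0.62]
```

This is the same quantity that MarkovChain.distribution (listed under Utilities below) reports, expressed directly in linear algebra.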
Using a transition function
In [5]: def identity_transition(x, y):
   ...:     if x == y:
   ...:         return 1
   ...:     return 0
   ...:
In [6]: transMatrix = Table().states(np.arange(1,4)).transition_function(identity_transition)
In [7]: transMatrix.toMarkovChain()
Out[7]:
1 2 3
1 1.0 0.0 0.0
2 0.0 1.0 0.0
3 0.0 0.0 1.0
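A transition function simply assigns a probability to each (source, target) pair, and the matrix is built by evaluating it over all pairs of states. A rough sketch of that construction in plain Python and NumPy (not the prob140 internals, just the idea):

```python
import numpy as np

def identity_transition(x, y):
    # Probability of moving from state x to state y: stay put with certainty.
    return 1 if x == y else 0

states = np.arange(1, 4)

# Evaluate the transition function on every (source, target) pair.
P = np.array([[identity_transition(i, j) for j in states] for i in states],
             dtype=float)
print(P)  # identity matrix: every state transitions only to itself
```

Because the function returns 1 only when the source and target match, the resulting chain is absorbing in every state, matching the identity matrix shown above.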
Utilities

MarkovChain.distribution(starting_condition, n) | Finds the distribution of states after n steps given a starting condition
MarkovChain.steady_state() | The stationary distribution of the Markov chain
MarkovChain.mean_first_passage_times() | Finds the mean time it takes to reach state j from state i
MarkovChain.prob_of_path(starting_condition, ...) | Finds the probability of a path given a starting condition
MarkovChain.log_prob_of_path(...) | Finds the log-probability of a path given a starting condition
MarkovChain.mixing_time([cutoff, jump, p]) | Finds the mixing time
MarkovChain.accessibility_matrix() | Returns a matrix showing whether state j is accessible from state i
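The stationary distribution that steady_state returns is, mathematically, the left eigenvector of the transition matrix with eigenvalue 1, normalized to sum to 1. A NumPy sketch of that computation for the two-state chain from the first example (this mirrors what steady_state computes, not how prob140 implements it):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# The stationary distribution pi satisfies pi P = pi, i.e. pi is a left
# eigenvector of P with eigenvalue 1. Left eigenvectors of P are right
# eigenvectors of P.T.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi = pi / pi.sum()  # normalize so the probabilities sum to 1
print(pi)  # → [0.375 0.625]
```

One quick check: multiplying pi by P leaves it unchanged, which is exactly the defining property of a stationary distribution.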
Simulations

MarkovChain.move(state) | Transitions one step from the indicated state
MarkovChain.simulate_chain(starting_condition, n) | Simulates a path of length n following the Markov chain with the initial condition of starting_condition
MarkovChain.empirical_distribution(...) | Finds the empirical distribution
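Underneath these methods, simulating a chain is just repeated sampling: each step draws the next state from the row of the transition matrix indexed by the current state, and the empirical distribution is the fraction of time the path spends in each state. A hedged NumPy sketch of that loop (the function name move here is illustrative, not the prob140 method):

```python
import numpy as np

rng = np.random.default_rng(0)

states = ["A", "B"]
P = np.array([[0.5, 0.5],
              [0.3, 0.7]])

def move(i):
    # One step from state index i: sample the next state from row i of P.
    return rng.choice(len(states), p=P[i])

# Simulate a path of length n starting from "A".
n = 50_000
path = np.empty(n, dtype=int)
path[0] = 0
for t in range(1, n):
    path[t] = move(path[t - 1])

# Empirical distribution: fraction of time spent in each state.
empirical = np.bincount(path, minlength=len(states)) / n
print(dict(zip(states, empirical)))
```

For a long enough path, the empirical distribution of this chain approaches its steady state of (0.375, 0.625), which ties the simulation methods back to steady_state above.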