Markov Chain
Source: Internet · Editor: 程序博客网 · Date: 2024/05/02 02:58
http://www.52nlp.cn/lda-math-mcmc-%E5%92%8C-gibbs-sampling1
Java:
void markovConverge() {
    // Transition matrix: rows are the current state, columns the next state.
    double[][] q = {
        { 0.65, 0.28, 0.07 },
        { 0.15, 0.67, 0.18 },
        { 0.12, 0.36, 0.52 }
    };
    final int ITERATION = 100000;
    int[] walkStates = new int[ITERATION];
    int preState = 2; // start in the "low" state
    for (int i = 0; i < ITERATION; i++) {
        walkStates[i] = preState;
        // A uniform draw picks the next state from the current row of q.
        double u = Math.random();
        int state;
        if (u < q[preState][0]) {
            state = 0; // top
        } else if (u < q[preState][0] + q[preState][1]) {
            state = 1; // mid
        } else {
            state = 2; // low
        }
        preState = state;
    }
    // Empirical state frequencies approximate the stationary distribution.
    double p1 = 0, p2 = 0, p3 = 0;
    for (int i = 0; i < ITERATION; i++) {
        if (walkStates[i] == 0) {
            p1 += 1.0 / ITERATION;
        } else if (walkStates[i] == 1) {
            p2 += 1.0 / ITERATION;
        } else {
            p3 += 1.0 / ITERATION;
        }
    }
    System.err.println(p1 + " " + p2 + " " + p3);
}
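The same stationary distribution can also be reached without random sampling: repeatedly multiplying any initial distribution by the transition matrix converges to pi, because the rows of Q^n all tend to pi as n grows. A minimal sketch of this power-iteration check (the matrix is copied from the code above; the class and method names are illustrative):

```java
public class StationaryByPower {
    // Returns pi * Q^steps, starting from a point mass on state 0.
    // Any initial distribution converges to the same stationary vector.
    static double[] stationary(double[][] q, int steps) {
        double[] pi = new double[q.length];
        pi[0] = 1.0;
        for (int step = 0; step < steps; step++) {
            double[] next = new double[q.length];
            for (int i = 0; i < q.length; i++)
                for (int j = 0; j < q.length; j++)
                    next[j] += pi[i] * q[i][j]; // (pi Q)_j = sum_i pi_i * q_ij
            pi = next;
        }
        return pi;
    }

    public static void main(String[] args) {
        double[][] q = { { 0.65, 0.28, 0.07 },
                         { 0.15, 0.67, 0.18 },
                         { 0.12, 0.36, 0.52 } };
        double[] pi = stationary(q, 100);
        System.out.printf("%.4f %.4f %.4f%n", pi[0], pi[1], pi[2]);
        // → 0.2865 0.4885 0.2250
    }
}
```

This agrees with the frequencies the random walk above produces, which is exactly the convergence the simulation is meant to demonstrate.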
I also wrote a short MATLAB script that estimates pi from Q:
Q = [0.65 0.28 0.07; 0.15 0.67 0.18; 0.12 0.36 0.52];
ITERATION = 100000;
walkStates = zeros(ITERATION, 1);
pre_state = 2;
for i = 1:ITERATION
    u = rand;
    if u <= Q(pre_state, 1)
        state = 1;
    elseif u <= Q(pre_state, 1) + Q(pre_state, 2)
        state = 2;
    else
        state = 3;
    end
    walkStates(i) = pre_state;
    pre_state = state;
end
% Discard the first 90000 samples as burn-in, then estimate pi from the tail
% (note: assigning to pi shadows MATLAB's built-in constant).
burnIn = 90000;
tail = walkStates(burnIn:ITERATION);
pi = [sum(tail == 1), sum(tail == 2), sum(tail == 3)] / numel(tail)
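Since pi is the left eigenvector of Q with eigenvalue 1, the simulated estimates can be cross-checked by solving pi * Q = pi together with sum(pi) = 1 exactly. A sketch in Java (the class name and helper are illustrative, not part of the original code), using Gaussian elimination on (Q^T - I) with one row replaced by the normalization constraint:

```java
public class StationaryExact {
    // Solve pi * Q = pi, sum(pi) = 1, by Gaussian elimination with
    // partial pivoting on the augmented system [Q^T - I | 0], where the
    // last balance equation (redundant) is replaced by sum(pi) = 1.
    static double[] solve(double[][] q) {
        int n = q.length;
        double[][] a = new double[n][n + 1];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = q[j][i] - (i == j ? 1.0 : 0.0); // (Q^T - I), rhs 0
        for (int j = 0; j <= n; j++)
            a[n - 1][j] = 1.0; // normalization row: pi1 + ... + pin = 1
        // Forward elimination with partial pivoting.
        for (int col = 0; col < n; col++) {
            int piv = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(a[r][col]) > Math.abs(a[piv][col])) piv = r;
            double[] tmp = a[col]; a[col] = a[piv]; a[piv] = tmp;
            for (int r = col + 1; r < n; r++) {
                double f = a[r][col] / a[col][col];
                for (int c = col; c <= n; c++) a[r][c] -= f * a[col][c];
            }
        }
        // Back substitution.
        double[] x = new double[n];
        for (int r = n - 1; r >= 0; r--) {
            x[r] = a[r][n];
            for (int c = r + 1; c < n; c++) x[r] -= a[r][c] * x[c];
            x[r] /= a[r][r];
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] q = { { 0.65, 0.28, 0.07 },
                         { 0.15, 0.67, 0.18 },
                         { 0.12, 0.36, 0.52 } };
        double[] pi = solve(q);
        System.out.printf("%.4f %.4f %.4f%n", pi[0], pi[1], pi[2]);
        // → 0.2865 0.4885 0.2250
    }
}
```

The burn-in estimate from the MATLAB script should fluctuate around this exact answer, with the error shrinking as the number of retained samples grows.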