HMM training


Two main approaches:

  • EM algorithm (Baum-Welch); see the sketch after this list
  • Gradient descent
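
Baum-Welch is a special case of EM: each iteration runs the forward-backward algorithm to collect expected state-occupancy and transition counts, then re-estimates the parameters in closed form, so no step size is involved. Below is a minimal numpy sketch for a discrete-emission HMM; the names `A`, `B`, `pi` (transitions, emissions, initial distribution) are my own conventions, and this is an illustration, not the implementation from the references.

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """Scaled forward-backward pass (Rabiner-style scaling to avoid underflow).

    obs: integer np.ndarray of symbol indices.
    """
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    return alpha, beta, c  # log-likelihood = np.log(c).sum()

def baum_welch_step(A, B, pi, obs):
    """One EM iteration: E-step expected counts, closed-form M-step."""
    N, M = B.shape
    alpha, beta, c = forward_backward(A, B, pi, obs)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)        # P(state_t = i | obs)
    xi = np.zeros((N, N))                            # expected transition counts
    for t in range(len(obs) - 1):
        x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi += x / x.sum()
    A_new = xi / gamma[:-1].sum(axis=0)[:, None]     # re-estimate transitions
    B_new = np.zeros((N, M))                         # re-estimate emissions
    for k in range(M):
        B_new[:, k] = gamma[obs == k].sum(axis=0)
    B_new /= gamma.sum(axis=0)[:, None]
    return A_new, B_new, gamma[0], np.log(c).sum()
```

A toy run; each iteration is parameter-free and the log-likelihood never decreases:

```python
rng = np.random.default_rng(0)
A = np.array([[0.6, 0.4], [0.3, 0.7]])
B = rng.dirichlet(np.ones(3), size=2)        # 2 states, 3 output symbols
pi = np.array([0.5, 0.5])
obs = np.array([0, 1, 2, 2, 0, 1, 0, 2, 1, 1])
for _ in range(25):
    A, B, pi, ll = baum_welch_step(A, B, pi, obs)
```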

The main advantages of the Baum-Welch algorithm (and hence of ML training) are its simplicity and the fact that it requires no parameter tuning. Furthermore, even for ML training, the Baum-Welch algorithm achieves significantly faster convergence than standard gradient descent. Gradient descent, on the other hand (especially for large models), requires a careful search for an appropriate learning rate in order to achieve the best possible performance [1].
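
For contrast, here is a hedged sketch of gradient ascent on the same log-likelihood, in the spirit of the normalized-exponential (softmax) algorithms of Baldi and Chauvin [2]: rows of `A` and `B` are softmax-reparametrized so that unconstrained gradient steps keep them valid stochastic matrices. It reuses `forward_backward` from the sketch above; the function name `gradient_step` and the `lr` handling are my own, and `lr` is exactly the knob the paragraph above says must be tuned.

```python
def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gradient_step(thA, thB, pi, obs, lr):
    """One gradient-ascent step on log P(obs); thA, thB are unconstrained logits.

    (The initial distribution pi is kept fixed in this sketch for brevity.)
    """
    N, M = thB.shape
    A, B = softmax(thA), softmax(thB)
    alpha, beta, c = forward_backward(A, B, pi, obs)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((N, N))                            # expected transition counts
    for t in range(len(obs) - 1):
        x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi += x / x.sum()
    emit = np.zeros((N, M))                          # expected emission counts
    for k in range(M):
        emit[:, k] = gamma[obs == k].sum(axis=0)
    # Chain rule through the softmax:
    #   d logP / d theta_ij = E[count_ij] - prob_ij * E[total count from row i]
    thA = thA + lr * (xi - A * xi.sum(axis=1, keepdims=True))
    thB = thB + lr * (emit - B * emit.sum(axis=1, keepdims=True))
    return thA, thB, np.log(c).sum()
```

Unlike the EM step, each update here moves the logits by `lr` times the gradient: too small and convergence crawls, too large and the log-likelihood can oscillate or diverge. This sensitivity is what motivates per-parameter learning-rate adaptation schemes such as the one in [1].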

References:

  1. Bagos, P.G., Liakopoulos, T., Hamodrakas, S.J.: Faster Gradient Descent Training of Hidden Markov Models, Using Individual Learning Rate Adaptation. ICGI 2004: 40-52
  2. Baldi, P., Chauvin, Y.: Smooth On-Line Learning Algorithms for Hidden Markov Models. Neural Comput. 6(2) (1994) 305-316