NTU-Coursera Machine Learning: 機器學習技法 (Machine Learning Techniques)

The course extends the fundamental tools from "Machine Learning Foundations" into powerful and practical models along three directions: embedding numerous features, combining predictive features, and distilling hidden features.

Course Syllabus

Each of the following items corresponds to approximately one hour of video lecture.
Embedding Numerous Features
-- Linear Support Vector Machine
-- Dual Support Vector Machine
-- Kernel Support Vector Machine
-- Soft-Margin Support Vector Machine
-- Kernel Logistic Regression
-- Support Vector Regression
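
A minimal sketch of the large-margin models listed above, assuming scikit-learn as the tool; the course derives these models mathematically and does not prescribe this library, and the toy dataset, parameter values, and variable names are illustrative choices only.

    # Illustrative sketch only; not part of the course materials.
    # Soft-margin SVM with a Gaussian (RBF) kernel on a toy 2-D dataset.
    from sklearn.datasets import make_moons
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

    # C sets the soft-margin trade-off; gamma is the RBF kernel width.
    clf = SVC(kernel="rbf", C=1.0, gamma=1.0)
    clf.fit(X, y)

    print("training accuracy:", clf.score(X, y))
    print("support vectors per class:", clf.n_support_)

The same interface covers support vector regression (sklearn.svm.SVR with an epsilon-insensitive loss), and passing probability=True to SVC calibrates the outputs with Platt scaling, which connects to the Platt (1999) reference listed below.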

Combining Predictive Features
-- Bootstrap Aggregation
-- Adaptive Boosting
-- Decision Tree
-- Random Forest
-- Gradient Boosted Decision Tree
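
A minimal sketch of the aggregation models listed above, again assuming scikit-learn for illustration; the estimator counts, depths, and toy dataset are arbitrary assumptions, not course settings.

    # Illustrative sketch only; not part of the course materials.
    # Bagging, AdaBoost, a single decision tree, random forest, and GBDT
    # fit side by side on the same toy classification problem.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                                  GradientBoostingClassifier,
                                  RandomForestClassifier)
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    models = {
        "bagging (bootstrap aggregation)": BaggingClassifier(
            DecisionTreeClassifier(), n_estimators=50, random_state=0),
        "adaboost": AdaBoostClassifier(n_estimators=50, random_state=0),
        "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
        "random forest": RandomForestClassifier(n_estimators=50, random_state=0),
        "gradient boosted trees": GradientBoostingClassifier(n_estimators=50,
                                                             random_state=0),
    }
    for name, model in models.items():
        model.fit(X, y)
        print(f"{name}: training accuracy {model.score(X, y):.3f}")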

Distilling Hidden Features
-- Neural Network
-- Deep Learning
-- Radial Basis Function Network
-- Matrix Factorization
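
A minimal sketch of matrix factorization for recommendation (the last item above), written directly in NumPy since the model is just two low-rank factor matrices learned by stochastic gradient descent. The rank, step size, regularization, and synthetic ratings are assumptions made up for the example.

    # Illustrative sketch only; not part of the course materials.
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, d = 50, 40, 5                 # d latent features per user/item

    # Toy observed (user, item, rating) triples standing in for real data.
    ratings = [(rng.integers(n_users), rng.integers(n_items), rng.uniform(1, 5))
               for _ in range(500)]

    W = 0.1 * rng.standard_normal((n_users, d))     # user factor matrix
    V = 0.1 * rng.standard_normal((n_items, d))     # item factor matrix
    eta, lam = 0.05, 0.01                           # step size, L2 regularization

    for epoch in range(20):
        for u, i, r in ratings:
            err = r - W[u] @ V[i]                   # residual of the current prediction
            wu = W[u].copy()                        # keep old user factors for V's update
            W[u] += eta * (err * V[i] - lam * wu)
            V[i] += eta * (err * wu - lam * V[i])

    print("predicted rating for user 0, item 0:", W[0] @ V[0])

Neural network and RBF network models from the same block could be sketched analogously (e.g., with sklearn.neural_network.MLPClassifier), but the factorization example is the most self-contained.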

Summary


Further Reading

Prerequisite Textbook

  • Learning from Data: A Short Course. Abu-Mostafa, Magdon-Ismail, Lin, 2013.

References

Lectures 201-204:
  • Learning from Data e-Chapter 8: Support Vector Machine, available as a free download from http://book.caltech.edu/bookforum/ (account: mooc, password: massive).
  • A Training Algorithm for Optimal Margin Classifiers. Boser, Guyon, Vapnik, COLT 1992.
Lectures 205-206:
  • Kernel Logistic Regression and the Import Vector Machine. Zhu, Hastie, NIPS 2001.
  • Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods. Platt, 1999.
  • A Note on Platt's Probabilistic Outputs for Support Vector Machines. Lin, Lin, Weng, MLJ 2007.
  • SVM versus Least Squares SVM. Ye, Xiong.
  • A Tutorial on Support Vector Regression. Smola, Scholkopf.
Lectures 207-208:
  • A Linear Ensemble of Individual and Blended Models for Music Rating Prediction. Chen et al.
  • Bagging Predictors. Breiman.
  • A Short Introduction to Boosting. Freund, Schapire.
Lectures 209-211:
  • Classification and Regression Trees. Loh (overview article on decision trees).
  • Classification and Regression Trees. Breiman et al. (the CART book).
  • Random Forests. Breiman.
  • Greedy Function Approximation: A Gradient Boosting Machine. Friedman.
Lectures 212-213:
  • Learning from Data e-Chapter 7: Neural Networks, available as a free download from http://book.caltech.edu/bookforum/ (account: mooc, password: massive).
  • Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. Vincent et al.
Lecture 214:
  • Learning from Data e-Chapter 6: Similarity Models, available as a free download from http://book.caltech.edu/bookforum/ (account: mooc, password: massive).
  • Three Learning Phases for Radial-Basis-Function Networks. Schwenker et al.
Lecture 215:
  • Matrix Factorization Techniques for Recommender Systems. Koren et al.

For more discussion and exchange about Machine Learning, please follow this blog and the Sina Weibo account songzi_tea.
