Machine Learning Diagnostic
Once we have done some troubleshooting for errors in our predictions by:
- Getting more training examples
- Trying smaller sets of features
- Trying additional features
- Trying polynomial features
- Increasing or decreasing λ
we can move on to evaluate our new hypothesis.
Evaluating a Hypothesis
1. Randomly split the data into two sets: a training set (70%) and a test set (30%).
2. Learn Θ and minimize Jtrain(Θ) using the training set.
3. Compute the test set error Jtest(Θ).
The test set error for linear regression is Jtest(Θ) = 1/(2·mtest) · Σ (hΘ(x(i)test) − y(i)test)² over the test examples; for classification, the average misclassification (0/1) error on the test set is used instead.
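As a concrete illustration of these three steps, here is a minimal sketch using NumPy and scikit-learn; the synthetic dataset, the 70/30 split, and plain linear regression as the hypothesis are assumptions made only for this example.

```python
# Minimal sketch: 70/30 train/test split and test-set error for linear regression.
# X and y are placeholder arrays standing in for a real dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))              # placeholder features
y = 0.5 * X[:, 0] ** 2 + rng.normal(0, 0.3, 100)   # placeholder targets

# 1. Randomly split the data 70/30.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# 2. Learn Theta by minimizing the training error.
model = LinearRegression().fit(X_train, y_train)

# 3. Compute the test set error (squared-error cost, matching Jtest above).
residuals = model.predict(X_test) - y_test
J_test = np.sum(residuals ** 2) / (2 * len(y_test))
print("Jtest(Theta) =", J_test)
```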
Given many models with different polynomial degrees, we can use the following procedure to pick the best one:
1. Break down our dataset into three sets:
- Training set: 60%
- Cross validation set: 20%
- Test set: 20%
2. Calculate three separate error values for the three different sets:
- Optimize the parameters in Θ using the training set for each polynomial degree.
- Find the polynomial degree d with the least error using the cross validation set.
- Estimate the generalization error using the test set with Jtest(Θ(d)), where d is the degree chosen with the cross validation set.
*This way, the degree of the polynomial d has not been trained using the test set.
**We might generally expect JCV(Θ) to be lower than Jtest(Θ), since the degree d was picked to minimize the cross validation error, so JCV(Θ) is an optimistic estimate of the generalization error.
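A sketch of this degree-selection procedure follows; the synthetic dataset, the 60/20/20 split, and candidate degrees 1 through 10 are all illustrative choices.

```python
# Sketch of model selection over polynomial degrees with a 60/20/20 split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 200)

# 60% train, 20% cross validation, 20% test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cv, X_test, y_cv, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

best_d, best_cv_err, best_model = None, np.inf, None
for d in range(1, 11):
    # Optimize Theta on the training set for each polynomial degree d.
    model = make_pipeline(PolynomialFeatures(degree=d), LinearRegression()).fit(X_train, y_train)
    cv_err = mean_squared_error(y_cv, model.predict(X_cv)) / 2
    if cv_err < best_cv_err:
        best_d, best_cv_err, best_model = d, cv_err, model

# Estimate the generalization error on the test set, which was never used to pick d.
test_err = mean_squared_error(y_test, best_model.predict(X_test)) / 2
print(f"chosen degree d={best_d}, Jcv={best_cv_err:.4f}, Jtest={test_err:.4f}")
```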
Diagnosing Bias vs. Variance
High bias (underfitting): both Jtrain(Θ) and JCV(Θ) will be high, with JCV(Θ) ≈ Jtrain(Θ).
High variance (overfitting): Jtrain(Θ) will be low and JCV(Θ) will be much greater than Jtrain(Θ).
Choosing the Regularization Parameter λ
In order to choose the model and the regularization term λ, we need to:
- Create a list of lambdas (i.e. λ∈{0,0.01,0.02,0.04,0.08,0.16,0.32,0.64,1.28,2.56,5.12,10.24});
- Create a set of models with different degrees or any other variants.
- Iterate through the λs and for each λ go through all the models to learn some Θ.
- Compute the cross validation error JCV(Θ) using the Θ learned with each λ, but evaluate JCV(Θ) without regularization (i.e. with λ = 0).
- Select the best combo that produces the lowest error on the cross validation set.
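The loop below sketches this λ search with scikit-learn; the synthetic data, the train/CV split, the candidate degrees, and the use of Ridge's alpha to stand in for λ are all assumptions made for illustration.

```python
# Sketch: iterate over the lambdas and model variants, learn Theta with each
# combination, and keep the combo with the lowest cross validation error.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 200)
X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size=0.3, random_state=0)

lambdas = [0, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28, 2.56, 5.12, 10.24]
degrees = range(1, 9)  # the "different models" being compared

best = None
for lam in lambdas:
    for d in degrees:
        # Learn Theta with regularization strength lam (Ridge's alpha) on the training set.
        model = make_pipeline(PolynomialFeatures(degree=d), Ridge(alpha=lam))
        model.fit(X_train, y_train)
        # Evaluate Jcv(Theta) without the regularization term (plain squared error).
        cv_err = mean_squared_error(y_cv, model.predict(X_cv)) / 2
        if best is None or cv_err < best[0]:
            best = (cv_err, lam, d)

cv_err, lam, d = best
print(f"lowest Jcv={cv_err:.4f} at lambda={lam}, degree={d}")
```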
Learning Curves
Experiencing high bias:
Low training set size: causes Jtrain(Θ) to be low and JCV(Θ) to be high.
Large training set size: causes both Jtrain(Θ) and JCV(Θ) to be high with Jtrain(Θ)≈JCV(Θ).
If the algorithm is suffering from high bias, getting more training data will not (by itself) help much.
Experiencing high variance:
Low training set size: Jtrain(Θ) will be low and JCV(Θ) will be high.
Large training set size: Jtrain(Θ) increases with training set size and JCV(Θ) continues to decrease without leveling off. Also, Jtrain(Θ) < JCV(Θ) but the difference between them remains significant.
If the algorithm is suffering from high variance, getting more training data is likely to help.
In practice, especially for small training sets, when you plot learning curves to debug your algorithms, it is often helpful to average across multiple sets of randomly selected examples to determine the training error and cross validation error.
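One way to produce such averaged learning curves is sketched below; the synthetic data, the subset sizes, and the choice of ten random subsets per size are illustrative.

```python
# Sketch of a learning curve: train on increasingly large random subsets and
# compare Jtrain against Jcv, averaging over several random subsets per size.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(300, 1))
y = 2 * X[:, 0] + rng.normal(0, 0.5, 300)

X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size=0.3, random_state=0)

for m in range(5, len(X_train), 10):
    train_errs, cv_errs = [], []
    for _ in range(10):  # average over 10 random subsets of size m
        idx = rng.choice(len(X_train), size=m, replace=False)
        model = LinearRegression().fit(X_train[idx], y_train[idx])
        train_errs.append(mean_squared_error(y_train[idx], model.predict(X_train[idx])) / 2)
        cv_errs.append(mean_squared_error(y_cv, model.predict(X_cv)) / 2)
    print(m, np.mean(train_errs), np.mean(cv_errs))
```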
Deciding What to Do Next Revisited
- Getting more training examples: Fixes high variance
- Trying smaller sets of features: Fixes high variance
- Adding features: Fixes high bias
- Adding polynomial features: Fixes high bias
- Decreasing λ: Fixes high bias
- Increasing λ: Fixes high variance.
Diagnosing Neural Networks
- A neural network with fewer parameters is prone to underfitting. It is also computationally cheaper.
- A large neural network with more parameters is prone to overfitting. It is also computationally expensive. In this case you can use regularization (increase λ) to address the overfitting.
Using a single hidden layer is a good starting default. You can train your neural network on a number of hidden layers using your cross validation set. You can then select the one that performs best.
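A rough sketch of selecting a network architecture (and regularization strength) on a cross validation set; the dataset, the layer sizes, and the alpha values are placeholders, and scikit-learn's MLPClassifier is used purely for illustration.

```python
# Sketch: compare neural network sizes on a held-out cross validation set
# and keep the configuration that performs best.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size=0.3, random_state=0)

best = None
for hidden in [(10,), (50,), (50, 50)]:   # small vs. larger networks
    for alpha in [1e-4, 1e-2, 1.0]:       # L2 penalty, playing the role of lambda
        clf = MLPClassifier(hidden_layer_sizes=hidden, alpha=alpha,
                            max_iter=2000, random_state=0).fit(X_train, y_train)
        acc = clf.score(X_cv, y_cv)       # cross validation accuracy
        if best is None or acc > best[0]:
            best = (acc, hidden, alpha)

print("best CV accuracy %.3f with layers %s, alpha %s" % best)
```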
Model Complexity Effects:
- Lower-order polynomials (low model complexity) have high bias and low variance. In this case, the model fits poorly consistently.
- Higher-order polynomials (high model complexity) fit the training data extremely well and the test data extremely poorly. These have low bias on the training data, but very high variance.