Machine Learning — Mistake Review (Week 2)

Source: Internet | Editor: 程序博客网 | Date: 2024/05/17 22:10

Which of the following are reasons for using feature scaling?


A. It prevents the matrix X^T X (used in the normal equation) from being non-invertible (singular/degenerate).

B. It speeds up gradient descent by making it require fewer iterations to get to a good solution.

C. It speeds up gradient descent by making each iteration of gradient descent less expensive to compute.

D. It is necessary to prevent the normal equation from getting stuck in local optima.


The correct answer is B: It speeds up gradient descent by making it require fewer iterations to get to a good solution.

[Explanation] Feature scaling speeds up gradient descent by avoiding the many extra iterations that are required when one or more features take on much larger values than the rest. The cost function J(θ) for linear regression has no local optima, so D is wrong. The magnitude of the feature values is insignificant in terms of the computational cost of a single iteration, so C is wrong; scaling reduces the *number* of iterations, not the cost of each one.
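The effect is easy to demonstrate. Below is a minimal sketch (with synthetic data and illustrative values chosen by me, not from the original post): batch gradient descent is run on the same linear-regression problem twice, once on raw features with very different scales and once after standardizing each feature. The unscaled run forces a tiny learning rate and fails to converge within the iteration budget, while the scaled run converges in a few hundred iterations.

```python
import numpy as np

# Synthetic data: one small-scale feature (~0-5) and one large-scale
# feature (~0-2000), e.g. number of rooms vs. house size in sq. ft.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 5, 100), rng.uniform(0, 2000, 100)])
y = X @ np.array([2.0, 0.003]) + 1.0

def gradient_descent(X, y, alpha, max_iters=100_000, tol=1e-8):
    """Batch gradient descent on J(theta) = (1/2m) ||X theta - y||^2.

    Returns the parameters and the number of iterations actually used
    (max_iters means the gradient-norm tolerance was never reached).
    """
    m = len(y)
    Xb = np.column_stack([np.ones(m), X])  # prepend intercept column
    theta = np.zeros(Xb.shape[1])
    for i in range(1, max_iters + 1):
        grad = Xb.T @ (Xb @ theta - y) / m
        theta -= alpha * grad
        if np.linalg.norm(grad) < tol:
            return theta, i
    return theta, max_iters

# Unscaled: the ~2000-scale feature makes the largest curvature of J huge,
# so alpha must be tiny to avoid divergence, and progress is very slow.
_, iters_raw = gradient_descent(X, y, alpha=4e-7)

# Scaled: standardize each feature so all directions of J are comparable;
# a much larger alpha is stable and convergence is fast.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
_, iters_scaled = gradient_descent(X_scaled, y, alpha=0.1)

print(f"unscaled: {iters_raw} iterations, scaled: {iters_scaled} iterations")
```

The per-iteration cost is identical in both runs (one matrix-vector product each), which is exactly why answer C is wrong and answer B is right: scaling changes only how many iterations are needed.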
