The correct choice is 2: "It speeds up gradient descent by making it require fewer iterations to get to a good solution."
Reason: feature scaling speeds up gradient descent by avoiding the many extra iterations that are required when one or more features take on much larger values than the rest (see the sketch after this answer).
The other options are not selected:
1. "It speeds up solving for θ using the normal equation." Incorrect, because the magnitude of the feature values is insignificant in terms of computational cost: the normal equation θ = (XᵀX)⁻¹Xᵀy solves for θ in one step, with no iterative search for scaling to accelerate.
4. Ruled out because the cost function J(θ) for linear regression is convex and therefore has no local optima, so scaling is not needed to avoid getting stuck in one.
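To make the selected answer concrete, here is a minimal Python sketch (not from the original post; the synthetic data, the learning rates, the stopping rule, and the gradient_descent helper are all assumptions chosen for illustration). It runs batch gradient descent on a linear-regression cost with and without mean normalization and compares the iteration counts:

```python
# Minimal sketch (assumptions: synthetic data, hand-picked learning rates,
# relative-change stopping rule). It contrasts gradient-descent iteration
# counts on raw vs. mean-normalized features for linear regression.
import numpy as np

def gradient_descent(X, y, alpha, tol=1e-9, max_iters=100_000):
    """Batch gradient descent for linear regression; returns iterations used."""
    m, n = X.shape
    Xb = np.hstack([np.ones((m, 1)), X])      # prepend intercept column
    theta = np.zeros(n + 1)
    prev_cost = None
    for i in range(1, max_iters + 1):
        err = Xb @ theta - y
        cost = (err @ err) / (2 * m)          # J(theta) for current theta
        if prev_cost is not None and abs(prev_cost - cost) <= tol * prev_cost:
            return i                          # J(theta) has plateaued
        theta -= alpha * (Xb.T @ err) / m     # gradient step
        prev_cost = cost
    return max_iters                          # did not converge within budget

rng = np.random.default_rng(0)
m = 200
size = rng.uniform(500.0, 3000.0, m)          # feature 1: large range
rooms = rng.integers(1, 6, m).astype(float)   # feature 2: small range
X = np.column_stack([size, rooms])
y = 120.0 * size + 5000.0 * rooms + rng.normal(0.0, 1000.0, m)

# Raw features: the elongated cost surface forces a tiny learning rate,
# so gradient descent needs many iterations (it may exhaust max_iters).
print("raw:   ", gradient_descent(X, y, alpha=1e-7))

# Mean-normalized features: comparable scales allow a much larger learning
# rate, and far fewer iterations reach a good solution.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)
print("scaled:", gradient_descent(X_norm, y, alpha=0.1))
```

The point of the comparison: on the raw features the largest eigenvalue of XᵀX is dominated by the big-valued feature, which caps the usable learning rate and makes the small-valued directions crawl; after normalization all directions have comparable curvature, so one large learning rate converges quickly.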