Deep Learning: Regularization (7)
Source: Internet · Editor: 程序博客网 · Time: 2024/06/04 18:58
Multi-Task Learning
Multi-task learning (Caruana, 1993) is a way to improve generalization by pooling the examples (which can be seen as soft constraints imposed on the parameters) arising out of several tasks.
In the same way that additional training examples put more pressure on the parameters of the model towards values that generalize well, when part of a model is shared across tasks, that part of the model is more constrained towards good values (assuming the sharing is justified), often yielding better generalization.
The figure illustrates a very common form of multi-task learning, in which different supervised tasks (predicting y^(i) given x) share the same input x, as well as an intermediate-level representation h^(shared) capturing a common pool of factors. The model can generally be divided into two kinds of parts, with associated parameters:
(1) Task-specific parameters, which benefit only from the examples of their own task in achieving good generalization. These are the upper layers of the neural network in the figure.
(2) Generic parameters, shared across all tasks, which benefit from the pooled data of all the tasks. These are the lower layers of the neural network in the figure.
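The division into shared lower layers and task-specific heads can be sketched as follows. This is a minimal NumPy forward pass, not the book's implementation; the two tasks A and B, the single shared hidden layer, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: input, shared representation, per-task outputs.
n_in, n_shared, n_out_a, n_out_b = 10, 16, 3, 2

# Generic (shared) parameters: the lower layer mapping x -> h^(shared).
# In training these would receive gradients from the examples of *all* tasks.
W_shared = rng.normal(scale=0.1, size=(n_shared, n_in))

# Task-specific parameters: one head per task, each trained
# only on the examples of its own task.
W_task_a = rng.normal(scale=0.1, size=(n_out_a, n_shared))
W_task_b = rng.normal(scale=0.1, size=(n_out_b, n_shared))

def forward(x, head):
    """Shared lower layer followed by a task-specific upper layer."""
    h_shared = np.maximum(0.0, W_shared @ x)  # common representation h^(shared)
    return head @ h_shared

x = rng.normal(size=n_in)
y_a = forward(x, W_task_a)  # prediction for task A
y_b = forward(x, W_task_b)  # prediction for task B
```

Because `W_shared` appears on every task's forward path, each task's training examples constrain it, which is exactly the pooling of examples described above.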
Improved generalization and tighter generalization error bounds (Baxter, 1995) can be achieved thanks to the shared parameters, whose statistical strength is greatly improved (in proportion to the increased number of examples available for them, compared with the single-task scenario).
- Of course this will happen only if some assumptions about the statistical relationship between the different tasks are valid, meaning that there is something shared across some of the tasks.
- From the point of view of deep learning, the underlying prior belief is the following: among the factors that explain the variations observed in the data associated with the different tasks, some are shared across two or more tasks.