Deep Learning in Neural Networks: An Overview (Abstract and Preface)


Abstract and Preface

This series contains my study notes and personal reflections on this survey:


  • Abstract and Preface
    • Abstract (Original)
    • Abstract (Translation)
    • Notes
    • Preface (Original)
    • Preface (Translation)

Abstract (Original)

In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

Abstract (Translation)

In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises the relevant work, much of which dates from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, that is, the chains of possibly learnable, causal links between actions and effects. The author reviews deep supervised learning (including a recap of the history of backpropagation), unsupervised learning, reinforcement learning and evolutionary computation, as well as indirect search for short programs that encode deep and large networks.

Notes

  • Recurrent Neural Networks (RNNs)
  • Natural Language Processing (NLP)
  • Deep supervised learning
  • Backpropagation (BP, 反向传播); a minimal code sketch follows this list
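
Since the notes above single out backpropagation, here is a minimal, self-contained sketch of the idea in NumPy: a tiny 2-4-1 network trained on XOR, with the error signal propagated backwards along the same chain of causal links (the credit assignment path mentioned in the abstract). The task, layer sizes, learning rate, and variable names are illustrative assumptions, not anything taken from the survey itself.

```python
# Minimal backpropagation sketch (illustrative only, not from the survey).
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR data: four inputs and their binary targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2-4-1 network, randomly initialised.
W1, b1 = rng.normal(scale=1.0, size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(scale=1.0, size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, n = 1.0, len(X)
for step in range(10000):
    # Forward pass: activations flow input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Binary cross-entropy loss averaged over the batch.
    loss = -np.mean(y * np.log(out) + (1 - y) * np.log(1 - out))

    # Backward pass: the error derivative is propagated back along the
    # causal chain output -> hidden, assigning each weight its share of
    # the blame for the loss (credit assignment).
    d_out = (out - y) / n                  # dL/d(output pre-activation)
    d_h = (d_out @ W2.T) * h * (1 - h)     # dL/d(hidden pre-activation)

    # Gradient-descent update of all parameters.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(f"loss after training: {loss:.4f}")
print("predictions:", out.round(2).ravel())
```

The credit assignment path of this network has depth two: one learnable link from input to hidden layer and one from hidden layer to output. Deep learners in the survey's sense simply have much longer such chains.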

Preface (Original)

This is the preprint of an invited Deep Learning (DL) overview. One of its goals is to assign credit to those who contributed to the present state of the art. I acknowledge the limitations of attempting to achieve this goal. The DL research community itself may be viewed as a continually evolving, deep network of scientists who have influenced each other in complex ways. Starting from recent DL results, I tried to trace back the origins of relevant ideas through the past half century and beyond, sometimes using "local search" to follow citations of citations backwards in time. Since not all DL publications properly acknowledge earlier relevant work, additional global search strategies were employed, aided by consulting numerous neural network experts. As a result, the present preprint mostly consists of references. Nevertheless, through an expert selection bias I may have missed important work. A related bias was surely introduced by my special familiarity with the work of my own DL research group in the past quarter-century. For these reasons, this work should be viewed as merely a snapshot of an ongoing credit assignment process. To help improve it, please do not hesitate to send corrections and suggestions to juergen@idsia.ch.

Preface (Translation)

This is the preprint of an invited Deep Learning (DL) overview. One of its goals is to assign credit to those who contributed to the present state of the art. I acknowledge the limitations of attempting to achieve this goal: the DL research community itself can be viewed as a continually evolving, deep network of scientists who have influenced each other in complex ways. Starting from recent DL results, I tried to trace the relevant ideas back through the past half century and beyond, sometimes using "local search" to follow citations of citations backwards in time. Since not all DL publications properly acknowledge earlier relevant work, additional global search strategies were employed, aided by consulting numerous neural network experts. As a result, this preprint mostly consists of references; nevertheless, through this expert selection bias I may still have missed important work. A related bias surely comes from my special familiarity with the work of my own DL research group over the past quarter-century. For these reasons, this article should be viewed as merely a snapshot of an ongoing credit assignment process. To help improve it, please send corrections and suggestions to juergen@idsia.ch.
