Principles of training multi-layer neural network using backpropagation

(Link to the original article)
This article describes the teaching process of a multi-layer neural network employing the backpropagation algorithm. To illustrate this process, a three-layer neural network with two inputs and one output, shown in the picture below, is used:

(Translator's note: the picture shows four columns of nodes, but by convention the input nodes are not counted as a layer, so the network is called three-layer.)
Each neuron is composed of two units. The first unit adds the products of the weight coefficients and the input signals. The second unit realises a nonlinear function, called the neuron activation function. Signal e is the adder output signal, and y = f(e) is the output signal of the nonlinear element. Signal y is also the output signal of the neuron.

(Translator's note: the "input signals" here are not only the network inputs; for neurons in deeper layers they are the outputs of the previous layer.)
To teach the neural network we need a training data set. The training data set consists of input signals (x1 and x2) assigned with a corresponding target (desired output) z. The network training is an iterative process. In each iteration the weight coefficients of the nodes are modified using new data from the training data set. The modification is calculated using the algorithm described below: each teaching step starts with forcing both input signals from the training set. After this stage we can determine the output signal values for each neuron in each network layer. The pictures below illustrate how the signal propagates through the network. Symbols w(xm)n represent the weights of connections between network input xm and neuron n in the input layer. Symbols yn represent the output signal of neuron n.
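The forward pass for a single neuron can be sketched in a few lines of Python. This is a minimal illustration, not the article's own code; the sigmoid activation and the sample weights are assumptions (the article does not specify f):

```python
import math

def sigmoid(e):
    # A common choice of nonlinear activation function f(e) (an assumption here).
    return 1.0 / (1.0 + math.exp(-e))

def neuron_output(weights, inputs):
    # First unit: adder, e = sum of weight * input products.
    e = sum(w * x for w, x in zip(weights, inputs))
    # Second unit: nonlinear element, y = f(e).
    return sigmoid(e)

# Forward pass for one neuron fed by the two network inputs x1, x2
# (the weight values are arbitrary for illustration):
y1 = neuron_output([0.5, -0.3], [1.0, 0.0])
```

Applying the same function layer by layer, using each layer's outputs as the next layer's inputs, reproduces the signal propagation shown in the pictures.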
Propagation of signals through the hidden layer. Symbols wmn represent the weights of connections between the output of neuron m and the input of neuron n in the next layer.
Propagation of signals through the output layer.
In the next algorithm step the output signal of the network, y, is compared with the desired output value (the target), which is found in the training data set. The difference is called the error signal δ of the output layer neuron.
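The error signal of the output neuron is simply the difference between the target and the computed output; a one-line sketch (the function name is illustrative):

```python
def output_error(z, y):
    # δ of the output-layer neuron: target z minus actual network output y.
    return z - y

delta = output_error(1.0, 0.8)
```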
It is impossible to compute the error signal for internal neurons directly, because the output values of these neurons are unknown. For many years an effective method for training multilayer networks was unknown; only in the mid-eighties was the backpropagation algorithm worked out. The idea is to propagate the error signal δ (computed in a single teaching step) back to all neurons whose output signals were inputs for the neuron in question.
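The backward step described above can be sketched as a weighted sum: each hidden neuron collects the error signals of the neurons it feeds, weighted by the same connection weights used in the forward pass. A minimal illustration (function and argument names are assumptions):

```python
def hidden_error(outgoing_weights, downstream_deltas):
    # δ for a hidden neuron m: sum over n of wmn * δn, where wmn are the
    # same weights used during the forward pass, traversed in reverse.
    return sum(w * d for w, d in zip(outgoing_weights, downstream_deltas))

# A neuron feeding two downstream neurons with weights 0.5 and -1.0,
# whose error signals are 0.2 and 0.1:
dm = hidden_error([0.5, -1.0], [0.2, 0.1])
```

Because errors arriving from several neurons are simply added, the same function covers the case shown in the illustration where a neuron's error is the sum of several propagated terms.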
The weight coefficients wmn used to propagate the errors back are equal to those used during computing the output value. Only the direction of data flow is changed (signals are propagated from outputs to inputs one after the other). This technique is used for all network layers. If the propagated errors come from several neurons, they are added. The illustration is below:
When the error signal for each neuron is computed, the weight coefficients of each neuron's input nodes may be modified. In the formulas below, df(e)/de represents the derivative of the activation function of the neuron whose weights are modified.
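The update rule in the formulas can be sketched directly. This assumes the sigmoid activation used earlier (whose derivative has the closed form f(e)·(1 − f(e))); names are illustrative:

```python
import math

def sigmoid(e):
    return 1.0 / (1.0 + math.exp(-e))

def sigmoid_derivative(e):
    # For the sigmoid, df(e)/de = f(e) * (1 - f(e)).
    s = sigmoid(e)
    return s * (1.0 - s)

def updated_weight(w, eta, delta, e, x):
    # w' = w + η * δ * df(e)/de * x, matching the article's update formulas:
    # η is the teaching-speed coefficient, δ the neuron's error signal,
    # e its adder output, and x the signal on this input connection.
    return w + eta * delta * sigmoid_derivative(e) * x
```

With δ = 0 the weight is unchanged, as expected: a neuron with no error contributes no correction.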
The coefficient η affects the network teaching speed. There are a few techniques for selecting this parameter. The first method is to start the teaching process with a large value of the parameter; while the weight coefficients are being established, the parameter is gradually decreased. The second, more complicated, method starts teaching with a small parameter value: during the teaching process the parameter is increased as the teaching advances, then decreased again in the final stage. Starting the teaching process with a low parameter value makes it possible to determine the signs of the weight coefficients.
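The two η-selection strategies above can be sketched as simple schedules. The exponential decay and the triangular shape below are assumptions for illustration; the article does not prescribe exact formulas or values:

```python
def decaying_eta(step, eta0=0.5, decay=0.99):
    # First method: start with a large η and decrease it gradually
    # as the weight coefficients become established.
    return eta0 * decay ** step

def warmup_then_decay_eta(step, total_steps, eta_min=0.01, eta_max=0.5):
    # Second method: small η at first, larger as teaching advances,
    # then small again in the final stage (a simple triangular shape).
    half = total_steps / 2
    if step <= half:
        return eta_min + (eta_max - eta_min) * step / half
    return eta_max - (eta_max - eta_min) * (step - half) / half
```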