How a BP Neural Network Works


The whole BP Neural Network computation repeats the four procedures below:

  1. Forward Propagation
  2. Compute cost
  3. Backward Propagation
  4. Update Parameters

Today I want to summarize them based on my own understanding, since I learned this a long time ago and sometimes cannot remember the details of each step.

First, let's get an overview of the whole computation procedure for FP and BP from the picture below:

[Figure: forward and backward propagation computation flow, layer by layer]

Some diagram notations here:

  • Every rectangle represents a single hidden layer in the NN
  • Black rectangles represent the Forward Propagation computation sequence
  • Red rectangles represent the Backward Propagation computation sequence
  • The formulas inside each rectangle are the computations performed for each layer in FP or BP; we will discuss them in later sections

See Denotation for all the notation used in the picture above.


Forward Propagation

Forward Propagation is the sequence that computes from the left of the NN (the input X) to the right (the prediction Ŷ).

For an NN with depth L, it repeats the computation procedure below for each of the first L−1 layers:

  1. Input: the output of the previous layer, A^{[l-1]}
  2. Compute the linear output Z^{[l]}:

     Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}    (1)

  3. Output: the activation of the current layer, A^{[l]}. Here g represents the activation function (ReLU, tanh, sigmoid); we use ReLU here for illustration:

     A^{[l]} = g(Z^{[l]})    (2)

The activation output of each layer is the input of the next layer; it's like a chain.

In layer L we need to compute the probability that the i-th training example belongs to label Y, so we use the sigmoid function in the last layer. The sigmoid function maps an unbounded input into the range [0, 1], which meets our need. So in our program we can map this probability to a prediction, say >= 0.5 means the image is a cat image, as sketched below.
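A minimal sketch of that mapping; the helper name predict and the 0.5 threshold argument are illustrative assumptions, not code from the original post:

import numpy as np

def sigmoid(Z):
    # squashes any real input into the range (0, 1)
    return 1 / (1 + np.exp(-Z))

def predict(AL, threshold=0.5):
    # AL: sigmoid outputs of the last layer, shape (1, m)
    # >= 0.5 is interpreted here as "this is a cat image"
    return (AL >= threshold).astype(int)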

In practice, we also store the intermediate outputs and the parameters of each layer in a cache, because they are needed when doing BP, as you can see in the picture above:

linear_cache^{[l]} = (A^{[l-1]}, W^{[l]}, b^{[l]})

activation_cache^{[l]} = (Z^{[l]})

python numpy implementation

formula (1):

import numpy as np

def linear_forward(A, W, b):
    # formula (1): linear part of the layer's forward propagation
    Z = np.dot(W, A) + b
    return Z
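Formula (2) then applies the activation on top of this linear output. Below is a minimal sketch that combines both steps and also builds the caches described above; the helper names relu and linear_activation_forward are my own assumptions, not code from the original post:

import numpy as np

def relu(Z):
    # ReLU activation: element-wise max(0, z)
    return np.maximum(0, Z)

def linear_activation_forward(A_prev, W, b, activation="relu"):
    # formula (1): linear step
    Z = np.dot(W, A_prev) + b
    # formula (2): activation step (relu for hidden layers, sigmoid for layer L)
    A = relu(Z) if activation == "relu" else 1 / (1 + np.exp(-Z))
    # caches kept for backward propagation
    linear_cache = (A_prev, W, b)
    activation_cache = Z
    return A, (linear_cache, activation_cache)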

Compute Cost (Loss/Error)

Here the cost function defines how well our algorithm performs when our prediction is Ŷ while the actual class is Y. The lower the cost, the better our algorithm works.

From another angle, you can think of it as the error between our prediction and the actual value: a lower cost means our prediction has higher accuracy, and thus the model works better.

The cross-entropy cost J can be computed using the following formula:

J = -\frac{1}{m} \sum_{i=1}^{m} \left( y^{(i)} \log(a^{[L](i)}) + (1 - y^{(i)}) \log(1 - a^{[L](i)}) \right)    (3)

python numpy implementation

formula (3):

def compute_cost(AL, Y):
    # formula (3): cross-entropy cost over the m training examples
    m = Y.shape[1]
    cost = -np.sum(np.multiply(Y, np.log(AL)) + np.multiply((1 - Y), np.log(1 - AL))) / m
    cost = np.squeeze(cost)    # turn a 1x1 array into a scalar
    return cost
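A quick sanity check of the shape convention (labels and activations are row vectors of shape (1, m); the numbers are made up for illustration):

Y  = np.array([[1, 0, 1]])           # true labels, shape (1, 3)
AL = np.array([[0.9, 0.2, 0.6]])     # layer-L sigmoid outputs
print(compute_cost(AL, Y))           # roughly 0.28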

Backward Propagation

BP is critical in the whole NN algorithm. It computes the partial derivatives of the cost function J with respect to the parameters. These derivatives will be used to update all the parameters (W, b) in all layers, which will be introduced in a later section. The updated parameters will be used in the next iteration to compute the cost again, which should reduce the cost compared to the previous iteration.

The BP algorithm computes from the rightmost layer L to the leftmost layer (the first layer) - see the picture at the beginning.

Within each layer (a red rectangle in the picture), the BP algorithm computes two kinds of derivatives: the activation derivative and the linear (parameter) derivatives.

Activation derivative

dZ^{[l]} = dA^{[l]} * g'(Z^{[l]})    (4)

g' is the derivative of the activation function used in the current layer (sigmoid/relu/tanh).
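A minimal sketch of formula (4) for the two activations used in this post; the helper names relu_backward and sigmoid_backward are my assumptions, and Z comes from the activation cache stored during FP:

import numpy as np

def relu_backward(dA, Z):
    # formula (4) with g = relu: g'(Z) is 1 where Z > 0, otherwise 0
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

def sigmoid_backward(dA, Z):
    # formula (4) with g = sigmoid: g'(Z) = s * (1 - s)
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)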

Linear derivative

dZ^{[l]} will be used to compute the three outputs below:

dW^{[l]} = \frac{\partial L}{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1]T}    (5)

db^{[l]} = \frac{\partial L}{\partial b^{[l]}} = \frac{1}{m} \sum_{i=1}^{m} dZ^{[l](i)}    (6)

dA^{[l-1]} = \frac{\partial L}{\partial A^{[l-1]}} = W^{[l]T} dZ^{[l]}    (7)

One small trick here: the initial derivative dA^{[L]} in the BP computation graph, for the last layer L, is not computed by formula (7). The formula for this initial derivative is

dA^{[L]} = -\left( \frac{Y}{A^{[L]}} - \frac{1 - Y}{1 - A^{[L]}} \right)    (8)
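In numpy this initial gradient is a single element-wise expression; a small sketch, where the function name is just for illustration and AL, Y are assumed to have shape (1, m):

import numpy as np

def initial_gradient(AL, Y):
    # formula (8): dA[L] = -(Y / A[L] - (1 - Y) / (1 - A[L]))
    return -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))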

python numpy implementation

formula (5/6/7):

def linear_backward(dZ, cache):
    """
    Implement the linear portion of backward propagation for a single layer (layer l)

    Arguments:
    dZ -- Gradient of the cost with respect to the linear output (of current layer l)
    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    A_prev, W, b = cache
    m = A_prev.shape[1]
    dW = np.dot(dZ, A_prev.T) / m                  # formula (5)
    db = np.sum(dZ, axis=1, keepdims=True) / m     # formula (6)
    dA_prev = np.dot(W.T, dZ)                      # formula (7)
    return (dA_prev, dW, db)

Update parameters

Since all parameter derivatives in all layers are now available, we can get the updated parameters with the formulas below, using gradient descent:

W^{[l]} = W^{[l]} - \alpha \, dW^{[l]}

b^{[l]} = b^{[l]} - \alpha \, db^{[l]}

where \alpha is the learning rate.
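A minimal sketch of this update step; storing the parameters and gradients in dictionaries keyed by layer index is my own bookkeeping assumption:

def update_parameters(parameters, grads, learning_rate):
    # parameters: {"W1": ..., "b1": ..., "W2": ..., ...}
    # grads:      {"dW1": ..., "db1": ..., ...}
    L = len(parameters) // 2            # number of layers
    for l in range(1, L + 1):
        parameters["W" + str(l)] -= learning_rate * grads["dW" + str(l)]
        parameters["b" + str(l)] -= learning_rate * grads["db" + str(l)]
    return parameters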


Summary

By now the whole NN algorithm should look clearer: it repeats the steps below, and after each iteration the cost should be reduced.


Forward Propagation → A^{[L]} → Compute Cost J → Backward Propagation → dW^{[l]} / db^{[l]} → Update Parameters → W^{[l]} / b^{[l]} → Forward Propagation → ...
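Putting the four steps together, one training run looks roughly like the sketch below; forward_propagation and backward_propagation are assumed wrappers around the per-layer functions discussed above, not code from this post:

def train(X, Y, parameters, num_iterations, learning_rate):
    for i in range(num_iterations):
        AL, caches = forward_propagation(X, parameters)     # step 1: FP through all L layers
        cost = compute_cost(AL, Y)                          # step 2: cross-entropy cost
        grads = backward_propagation(AL, Y, caches)         # step 3: BP to get dW/db per layer
        parameters = update_parameters(parameters, grads, learning_rate)  # step 4: gradient descent
    return parameters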

The number of iterations is a hyperparameter of the gradient descent algorithm in open-source ML frameworks such as TensorFlow. The more iterations we run, the lower the cost may get, with better-fitting parameters and higher prediction accuracy on the training set, but it may also cause overfitting.
