DeepLearing Study Notes - Improving Deep Neural Networks (Week 1 Assignment 3: Gradient Checking)
1 - Background
When training a neural network, we verify the gradients produced by backpropagation to make sure they are computed correctly. Forward propagation is comparatively simple and rarely goes wrong, so we use the cost computed by the forward pass to build a numerical approximation of the gradient and compare it against the backpropagated value.
The underlying formula is the definition of the derivative as a two-sided difference: on the basis of forward propagation we can evaluate the cost $J(\theta)$, so the gradient can be approximated by

$$\frac{\partial J}{\partial \theta} \approx \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2\varepsilon} \tag{1}$$
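As a quick, self-contained illustration of formula (1) (a toy snippet of my own, not part of the assignment), the two-sided difference applied to a simple function such as J(theta) = theta**2 recovers the analytic derivative 2*theta to high accuracy:

# Toy illustration of formula (1), not part of the assignment.
# For J(theta) = theta**2 the analytic derivative is 2*theta; the
# two-sided difference should match it closely.
def two_sided_diff(J, theta, epsilon=1e-7):
    return (J(theta + epsilon) - J(theta - epsilon)) / (2 * epsilon)

J = lambda theta: theta ** 2
theta = 3.0
print(two_sided_diff(J, theta))  # very close to the analytic value 2 * theta = 6.0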
2 - One-dimensional gradient checking
Consider the simple linear model $J(\theta) = \theta x$, where $\theta$ is the parameter and $x$ is a real-valued input. The forward and backward propagation code for this model is as follows:
# GRADED FUNCTION: forward_propagation

def forward_propagation(x, theta):
    """
    Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    J -- the value of function J, computed using the formula J(theta) = theta * x
    """
    ### START CODE HERE ### (approx. 1 line)
    J = theta * x
    ### END CODE HERE ###

    return J

x, theta = 2, 4
J = forward_propagation(x, theta)
print("J = " + str(J))
Output: J = 8
Since $J(\theta) = \theta x$, the derivative is $\frac{dJ}{d\theta} = x$, so backward propagation simply returns $x$:
# GRADED FUNCTION: backward_propagation

def backward_propagation(x, theta):
    """
    Computes the derivative of J with respect to theta (see Figure 1).

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    dtheta -- the gradient of the cost with respect to theta
    """
    ### START CODE HERE ### (approx. 1 line)
    dtheta = x
    ### END CODE HERE ###

    return dtheta

x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print("dtheta = " + str(dtheta))
Output: dtheta = 2
2-1 One-dimensional gradient checking
The steps are as follows:
- Compute the approximate gradient "gradapprox":

$$\theta^{+} = \theta + \varepsilon,\qquad \theta^{-} = \theta - \varepsilon,\qquad J^{+} = J(\theta^{+}),\qquad J^{-} = J(\theta^{-}),\qquad gradapprox = \frac{J^{+} - J^{-}}{2\varepsilon}$$
- Run backward propagation to obtain the gradient "grad".
- Compute the difference between the two with the following formula:

$$difference = \frac{\lVert grad - gradapprox \rVert_2}{\lVert grad \rVert_2 + \lVert gradapprox \rVert_2} \tag{2}$$
The norms can be computed with np.linalg.norm(...).
When the difference is sufficiently small (less than $10^{-7}$), the backpropagated gradient is considered correct.
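As a small illustration of formula (2) computed with np.linalg.norm (toy numbers of my own, not from the assignment):

import numpy as np

# Toy vectors standing in for grad and gradapprox (illustrative values only).
grad = np.array([0.5, -1.2, 3.0])
gradapprox = np.array([0.5000001, -1.1999999, 3.0000002])

numerator = np.linalg.norm(grad - gradapprox)                    # ||grad - gradapprox||_2
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # ||grad||_2 + ||gradapprox||_2
difference = numerator / denominator
print(difference)  # well below 1e-7, so this (fake) gradient would pass the check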
2-2 Gradient checking code:
# GRADED FUNCTION: gradient_check

def gradient_check(x, theta, epsilon=1e-7):
    """
    Implement the backward propagation presented in Figure 1.

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well
    epsilon -- tiny shift to the input to compute approximated gradient with formula (1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """
    # Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
    ### START CODE HERE ### (approx. 5 lines)
    thetaplus = theta + epsilon                        # Step 1
    thetaminus = theta - epsilon                       # Step 2
    J_plus = forward_propagation(x, thetaplus)         # Step 3
    J_minus = forward_propagation(x, thetaminus)       # Step 4
    gradapprox = (J_plus - J_minus) / (2 * epsilon)    # Step 5
    ### END CODE HERE ###

    # Check if gradapprox is close enough to the output of backward_propagation()
    ### START CODE HERE ### (approx. 1 line)
    grad = backward_propagation(x, theta)
    ### END CODE HERE ###

    ### START CODE HERE ### (approx. 1 line)
    numerator = np.linalg.norm(grad - gradapprox)                    # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # Step 2'
    difference = numerator / denominator                             # Step 3'
    ### END CODE HERE ###

    if difference < 1e-7:
        print("The gradient is correct!")
    else:
        print("The gradient is wrong!")

    return difference
Test:
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
The output is:
The gradient is correct!
difference = 2.91933588329e-10
The difference is clearly below the $10^{-7}$ threshold, so the backpropagated gradient is correct.
3 - N-dimensional gradient checking
This post uses a 3-layer neural network as the example; the model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
3-1 Forward and backward propagation:
Forward propagation code:
def forward_propagation_n(X, Y, parameters):
    """
    Implements the forward propagation (and computes the cost) presented in Figure 3.

    Arguments:
    X -- training set for m examples
    Y -- labels for m examples
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (5, 4)
                    b1 -- bias vector of shape (5, 1)
                    W2 -- weight matrix of shape (3, 5)
                    b2 -- bias vector of shape (3, 1)
                    W3 -- weight matrix of shape (1, 3)
                    b3 -- bias vector of shape (1, 1)

    Returns:
    cost -- the cost function (logistic cost for one example)
    """
    # retrieve parameters
    m = X.shape[1]
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    # Cost
    logprobs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)
    cost = 1. / m * np.sum(logprobs)

    cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)

    return cost, cache
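The relu and sigmoid helpers used above are provided by the assignment's utility file and are not shown in this post; a minimal sketch of what they might look like (my assumption, not the course's own code):

import numpy as np

# Plausible implementations of the activation helpers used by forward_propagation_n.
# The assignment ships its own versions; these are only a sketch.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0, x)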
The backward propagation deliberately seeds bugs at dW2 and db1 (an extra factor of 2 in dW2, and 4/m instead of 1/m in db1). The corrected version is shown below, with comments marking the two spots:
def backward_propagation_n(X, Y, cache):
    """
    Implement the backward propagation presented in figure 2.

    Arguments:
    X -- input datapoint, of shape (input size, 1)
    Y -- true "label"
    cache -- cache output from forward_propagation_n()

    Returns:
    gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1. / m * np.dot(dZ3, A2.T)
    db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)

    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1. / m * np.dot(dZ2, A1.T)  # no extra factor of 2 here (the seeded bug multiplied by 2)
    db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1. / m * np.dot(dZ1, X.T)
    db1 = 1. / m * np.sum(dZ1, axis=1, keepdims=True)  # the numerator is 1, not 4 (the seeded bug used 4./m)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
                 "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients
3-2 Gradient checking
For each parameter, do the following:
- Compute J_plus[i]:
  1. Set $\theta^{+}$ = np.copy(parameters_values)
  2. Set $\theta^{+}_i = \theta^{+}_i + \varepsilon$
  3. Compute $J^{+}_i$ with forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$))
- For $\theta^{-}$, compute J_minus[i] in the same way.
- Compute $gradapprox[i] = \dfrac{J^{+}_i - J^{-}_i}{2\varepsilon}$
Entry gradapprox[i] is thus the numerical approximation of the gradient with respect to parameter_values[i]. The agreement between the gradapprox vector and the backpropagated gradient is then measured with formula (2) above.
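The helpers dictionary_to_vector, vector_to_dictionary and gradients_to_vector are likewise supplied by the assignment and not listed in this post. A rough sketch of what they do, assuming the parameter shapes from the docstring above (W1 (5,4), b1 (5,1), W2 (3,5), b2 (3,1), W3 (1,3), b3 (1,1)):

import numpy as np

# Rough sketch (my own assumption) of the reshaping helpers used by gradient_check_n.
# Shapes follow the 3-layer model above.
PARAM_SHAPES = [("W1", (5, 4)), ("b1", (5, 1)), ("W2", (3, 5)),
                ("b2", (3, 1)), ("W3", (1, 3)), ("b3", (1, 1))]

def dictionary_to_vector(parameters):
    """Stack all parameters into one column vector; also return the key for each entry."""
    vector = np.concatenate([parameters[name].reshape(-1, 1) for name, _ in PARAM_SHAPES])
    keys = [name for name, shape in PARAM_SHAPES for _ in range(shape[0] * shape[1])]
    return vector, keys

def vector_to_dictionary(theta):
    """Inverse of dictionary_to_vector: cut the column vector back into named matrices."""
    parameters, start = {}, 0
    for name, shape in PARAM_SHAPES:
        size = shape[0] * shape[1]
        parameters[name] = theta[start:start + size].reshape(shape)
        start += size
    return parameters

def gradients_to_vector(gradients):
    """Stack the corresponding gradients (dW1, db1, ...) in the same order."""
    return np.concatenate([gradients["d" + name].reshape(-1, 1) for name, _ in PARAM_SHAPES])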
3-3 Gradient checking code:
# GRADED FUNCTION: gradient_check_n

def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
    """
    Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n

    Arguments:
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
    grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
    x -- input datapoint, of shape (input size, 1)
    y -- true "label"
    epsilon -- tiny shift to the input to compute approximated gradient with formula (1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """
    # Set-up variables
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)         # convert the gradients dictionary into a vector
    num_parameters = parameters_values.shape[0]   # number of parameters
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))

    # Compute gradapprox
    for i in range(num_parameters):
        # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because the function outputs two values but we only care about the first one
        ### START CODE HERE ### (approx. 3 lines)
        thetaplus = np.copy(parameters_values)                                        # Step 1
        thetaplus[i][0] = thetaplus[i][0] + epsilon                                   # Step 2
        J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))   # Step 3
        ### END CODE HERE ###

        # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
        ### START CODE HERE ### (approx. 3 lines)
        thetaminus = np.copy(parameters_values)                                         # Step 1
        thetaminus[i][0] = thetaminus[i][0] - epsilon                                   # Step 2
        J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))   # Step 3
        ### END CODE HERE ###

        # Compute gradapprox[i]
        ### START CODE HERE ### (approx. 1 line)
        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
        ### END CODE HERE ###

    # Compare gradapprox to backward propagation gradients by computing difference.
    ### START CODE HERE ### (approx. 1 line)
    numerator = np.linalg.norm(grad - gradapprox)                    # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # Step 2'
    difference = numerator / denominator                             # Step 3'
    ### END CODE HERE ###

    # Note the value of epsilon here: with epsilon = 1e-6 the check passes, while making
    # epsilon smaller can actually make the approximation agree less well with the derivative.
    if difference > 1e-7:
        print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
    else:
        print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")

    return difference
Test code:
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
Test result (running with the seeded bugs in dW2 and db1): There is a mistake in the backward propagation! difference = 0.285093156654
There is clearly a problem in the gradient computation. After fixing dW2 and db1 in the backward pass and rerunning: There is a mistake in the backward propagation! difference = 1.18855520355e-07
Although the output still reports a problem, the difference value is already very close to the threshold. At this point we change only the epsilon used for the two-sided gradient approximation to 1e-6, without changing the decision threshold.
Output: Your backward propagation works perfectly fine! difference = 8.26588225515e-09
So pay attention to how far the difference value is from the threshold. Why the result here overshoots the threshold by just a little, forcing us to adjust epsilon, is probably because the cost function has small local kinks, so the numerical estimate and the backpropagated gradient differ by slightly more than the threshold. In addition, the derivative of ReLU is ambiguous at 0, which can also make the check less accurate here.
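To make these last two points concrete (a toy experiment of my own, not part of the assignment): exactly at a ReLU kink the two-sided estimate is 0.5 no matter how small epsilon is, while backward_propagation_n uses np.int64(A > 0), i.e. a derivative of 0 at that point; and for a smooth function, shrinking epsilon too far eventually hurts accuracy because of floating-point cancellation.

import numpy as np

relu = lambda x: np.maximum(0.0, x)

def two_sided(f, x, eps):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# 1) At the ReLU kink the numerical estimate is 0.5 regardless of eps,
#    whereas the backward pass above uses np.int64(A > 0), i.e. 0 at exactly 0.
print(two_sided(relu, 0.0, 1e-7))  # 0.5

# 2) Round-off: for a smooth function the error first shrinks with eps,
#    then grows again once eps is so small that cancellation dominates.
f, x, exact = np.sin, 1.0, np.cos(1.0)
for eps in (1e-4, 1e-7, 1e-12):
    print(eps, abs(two_sided(f, x, eps) - exact))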