Improving Deep Neural Networks, Week 1: Gradient Checking
Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud: whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let’s do it!
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):

$$\frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2\varepsilon} \tag{1}$$

If you're not familiar with the "$\lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
- You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
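As a quick, toy illustration of equation (1) (not part of the assignment; the function J and values below are made up for this example), the two-sided difference gives a very accurate estimate of the derivative for a small epsilon:

import numpy as np

# Toy check of formula (1): J(theta) = theta**2, so dJ/dtheta = 2*theta.
theta, epsilon = 3.0, 1e-7
J = lambda t: t ** 2
gradapprox = (J(theta + epsilon) - J(theta - epsilon)) / (2 * epsilon)
print(gradapprox)   # ~6.0, which matches the analytic derivative 2 * theta = 6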
2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(\cdot)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
Figure 1: 1D linear model. The diagram shows the key computation steps: first start with $x$, then evaluate the function $J(x)$ ("forward propagation"), then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(\theta)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
# GRADED FUNCTION: forward_propagation

def forward_propagation(x, theta):
    """
    Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    J -- the value of function J, computed using the formula J(theta) = theta * x
    """

    ### START CODE HERE ### (approx. 1 line)
    J = np.dot(theta, x)
    ### END CODE HERE ###

    return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print("J = " + str(J))
J = 8
Expected Output:

** J **  8
# GRADED FUNCTION: backward_propagation

def backward_propagation(x, theta):
    """
    Computes the derivative of J with respect to theta (see Figure 1).

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well

    Returns:
    dtheta -- the gradient of the cost with respect to theta
    """

    ### START CODE HERE ### (approx. 1 line)
    dtheta = x
    ### END CODE HERE ###

    return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print("dtheta = " + str(dtheta))
dtheta = 2
Expected Output:

** dtheta **  2

Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
Instructions:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the steps to follow:
  1. $\theta^{+} = \theta + \varepsilon$
  2. $\theta^{-} = \theta - \varepsilon$
  3. $J^{+} = J(\theta^{+})$
  4. $J^{-} = J(\theta^{-})$
  5. $gradapprox = \frac{J^{+} - J^{-}}{2\varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable “grad”
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:

$$difference = \frac{\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1’. compute the numerator using np.linalg.norm(…)
- 2’. compute the denominator. You will need to call np.linalg.norm(…) twice.
- 3’. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
# GRADED FUNCTION: gradient_check

def gradient_check(x, theta, epsilon = 1e-7):
    """
    Implement the backward propagation presented in Figure 1.

    Arguments:
    x -- a real-valued input
    theta -- our parameter, a real number as well
    epsilon -- tiny shift to the input to compute approximated gradient with formula(1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """

    # Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
    ### START CODE HERE ### (approx. 5 lines)
    thetaplus = theta + epsilon                          # Step 1
    thetaminus = theta - epsilon                         # Step 2
    J_plus = forward_propagation(x, thetaplus)           # Step 3
    J_minus = forward_propagation(x, thetaminus)         # Step 4
    gradapprox = (J_plus - J_minus) / (2 * epsilon)      # Step 5
    ### END CODE HERE ###

    # Check if gradapprox is close enough to the output of backward_propagation()
    ### START CODE HERE ### (approx. 1 line)
    grad = backward_propagation(x, theta)
    ### END CODE HERE ###

    ### START CODE HERE ### (approx. 1 line)
    numerator = np.linalg.norm(grad - gradapprox)                     # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)   # Step 2'
    difference = numerator / denominator                              # Step 3'
    ### END CODE HERE ###

    if difference < 1e-7:
        print("The gradient is correct!")
    else:
        print("The gradient is wrong!")

    return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
The gradient is correct!
difference = 2.91933588329e-10
Expected Output:

** The gradient is correct! **  difference = 2.91933588329e-10
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation().
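As an aside (not part of the assignment), a deliberately wrong derivative would be caught by the same check. The buggy function below is hypothetical, invented just for illustration, and reuses forward_propagation and numpy from above:

# Hypothetical buggy derivative: returns 2*x instead of the correct dJ/dtheta = x.
def buggy_backward_propagation(x, theta):
    return 2 * x

x, theta, epsilon = 2, 4, 1e-7
gradapprox = (forward_propagation(x, theta + epsilon) - forward_propagation(x, theta - epsilon)) / (2 * epsilon)
grad = buggy_backward_propagation(x, theta)
difference = np.linalg.norm(grad - gradapprox) / (np.linalg.norm(grad) + np.linalg.norm(gradapprox))
print(difference)   # roughly 0.33, far above 1e-7, so the check flags the bug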
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
3) N-dimensional gradient checking
The following figure describes the forward and backward propagation of your fraud detection model.
LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Let’s look at your implementations for forward propagation and backward propagation.
def forward_propagation_n(X, Y, parameters):
    """
    Implements the forward propagation (and computes the cost) presented in Figure 3.

    Arguments:
    X -- training set for m examples
    Y -- labels for m examples
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (5, 4)
                    b1 -- bias vector of shape (5, 1)
                    W2 -- weight matrix of shape (3, 5)
                    b2 -- bias vector of shape (3, 1)
                    W3 -- weight matrix of shape (1, 3)
                    b3 -- bias vector of shape (1, 1)

    Returns:
    cost -- the cost function (logistic cost for one example)
    """

    # retrieve parameters
    m = X.shape[1]
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    # Cost
    logprobs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)
    cost = 1./m * np.sum(logprobs)

    cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)

    return cost, cache
Now, run backward propagation.
def backward_propagation_n(X, Y, cache):
    """
    Implement the backward propagation presented in figure 2.

    Arguments:
    X -- input datapoint, of shape (input size, 1)
    Y -- true "label"
    cache -- cache output from forward_propagation_n()

    Returns:
    gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
    """

    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1./m * np.dot(dZ3, A2.T)
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims=True)

    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1./m * np.dot(dZ2, A1.T)                      # no extra factor of 2 needed here
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1./m * np.dot(dZ1, X.T)
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims=True)     # the leading 4 was changed to 1

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
                 "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:

$$\frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2\varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "vector_to_dictionary()", which outputs back the "parameters" dictionary.
You will need these functions in gradient_check_n().
We have also converted the “gradients” dictionary into a vector “grad” using gradients_to_vector(). You don’t need to worry about that.
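If you are curious how these helpers can be implemented, here is a minimal sketch assuming the parameter shapes listed in the forward_propagation_n docstring above; the actual gc_utils versions may differ in details:

import numpy as np

# Minimal sketch of the conversion helpers (the real gc_utils implementations may differ).
def dictionary_to_vector(parameters):
    keys = ["W1", "b1", "W2", "b2", "W3", "b3"]
    # Reshape each parameter into a column and stack them into one long vector.
    vector = np.concatenate([parameters[k].reshape(-1, 1) for k in keys], axis=0)
    return vector, keys

def vector_to_dictionary(vector):
    # Shapes taken from the forward_propagation_n docstring.
    shapes = [("W1", (5, 4)), ("b1", (5, 1)), ("W2", (3, 5)),
              ("b2", (3, 1)), ("W3", (1, 3)), ("b3", (1, 1))]
    parameters, start = {}, 0
    for name, shape in shapes:
        size = shape[0] * shape[1]
        parameters[name] = vector[start:start + size].reshape(shape)
        start += size
    return parameters

def gradients_to_vector(gradients):
    keys = ["dW1", "db1", "dW2", "db2", "dW3", "db3"]
    return np.concatenate([gradients[k].reshape(-1, 1) for k in keys], axis=0)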
Exercise: Implement gradient_check_n().
Instructions: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute J_plus[i]:
  1. Set $\theta^{+}$ to np.copy(parameters_values)
  2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
  3. Calculate $J^{+}_i$ using forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$)).
- To compute J_minus[i]: do the same thing with $\theta^{-}$.
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2\varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:

$$difference = \frac{\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2} \tag{2}$$
# GRADED FUNCTION: gradient_check_n

def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
    """
    Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n

    Arguments:
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
    grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
    x -- input datapoint, of shape (input size, 1)
    y -- true "label"
    epsilon -- tiny shift to the input to compute approximated gradient with formula(1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """

    # Set-up variables
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))

    # Compute gradapprox
    for i in range(num_parameters):

        # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because the function outputs two values but we only care about the first one
        ### START CODE HERE ### (approx. 3 lines)
        thetaplus = np.copy(parameters_values)                                        # Step 1
        thetaplus[i][0] = thetaplus[i][0] + epsilon                                   # Step 2
        J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))   # Step 3
        ### END CODE HERE ###

        # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
        ### START CODE HERE ### (approx. 3 lines)
        thetaminus = np.copy(parameters_values)                                        # Step 1
        thetaminus[i][0] = thetaminus[i][0] - epsilon                                  # Step 2
        J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))  # Step 3
        ### END CODE HERE ###

        # Compute gradapprox[i]
        ### START CODE HERE ### (approx. 1 line)
        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
        ### END CODE HERE ###

    # Compare gradapprox to backward propagation gradients by computing difference.
    ### START CODE HERE ### (approx. 1 line)
    numerator = np.linalg.norm(grad - gradapprox)                     # Step 1'
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)   # Step 2'
    difference = numerator / denominator                              # Step 3'
    ### END CODE HERE ###

    if difference > 1e-7:
        print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
    else:
        print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")

    return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
There is a mistake in the backward propagation! difference = 1.18909130233e-07
Expected output:

** There is a mistake in the backward propagation! **  difference = 0.285093156781

It seems that there were errors in the backward_propagation_n code we gave you! Good that you've implemented the gradient check. Go back to backward_propagation and try to find/correct the errors (Hint: check dW2 and db1). Rerun the gradient check when you think you've fixed it. Remember you'll need to re-execute the cell defining backward_propagation_n() if you modify the code.
Can you get gradient check to declare your derivative computation correct? Even though this part of the assignment isn't graded, we strongly urge you to try to find the bug and re-run gradient check until you're convinced backprop is now correctly implemented.
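For reference, based on the hint (dW2 and db1) and the comments in backward_propagation_n above, the two offending lines inside that function and their corrected versions look roughly like this (the exact buggy constants are inferred from those comments):

# Buggy (as hinted): an extra factor of 2 on dW2 and a 4 instead of 1 on db1.
# dW2 = 1./m * np.dot(dZ2, A1.T) * 2
# db1 = 4./m * np.sum(dZ1, axis=1, keepdims=True)

# Corrected:
dW2 = 1./m * np.dot(dZ2, A1.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims=True)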
Note
- Gradient Checking is slow! Approximating the gradient with $\frac{\partial J}{\partial \theta} \approx \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2\varepsilon}$ is computationally costly. For this reason, we don't run gradient checking at every iteration during training. Just a few times to check if the gradient is correct.
- Gradient Checking, at least as we've presented it, doesn't work with dropout. You would usually run the gradient check algorithm without dropout to make sure your backprop is correct, then add dropout. (See the usage sketch after this note.)
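For example, a hypothetical training loop (the variables num_iterations, debug_mode, X, Y, and parameters are assumed to already exist) might run the check only occasionally, and only on a model without dropout:

# Usage sketch only: run the (slow) gradient check rarely, with dropout disabled.
for i in range(num_iterations):
    cost, cache = forward_propagation_n(X, Y, parameters)     # forward pass without dropout
    gradients = backward_propagation_n(X, Y, cache)
    if debug_mode and i % 1000 == 0:
        gradient_check_n(parameters, gradients, X, Y)          # occasional correctness check
    # ... parameter update (e.g., gradient descent) would go here ...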
Congrats, you can be confident that your deep learning model for fraud detection is working correctly! You can even use this to convince your CEO. :)
What you should remember from this notebook:
- Gradient checking verifies closeness between the gradients from backpropagation and the numerical approximation of the gradient (computed using forward propagation).
- Gradient checking is slow, so we don’t run it in every iteration of training. You would usually run it only to make sure your code is correct, then turn it off and use backprop for the actual learning process.