Andrew Ng's deeplearning.ai Course 1 Week 2 Programming Assignment


Logistic Regression with a Neural Network mindset

  • Build the general architecture of a learning algorithm, including:
    • Initializing parameters
    • Calculating the cost function and its gradient
    • Using an optimization algorithm (gradient descent)
  • Gather all three functions above into a main model function, in the right order.

1 - Packages

  • numpy is the fundamental package for scientific computing with Python.
  • h5py is a common package to interact with a dataset that is stored on an H5 file.
  • matplotlib is a famous library to plot graphs in Python.
  • PIL and scipy are used here to test your model with your own picture at the end.
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset

%matplotlib inline
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
The _orig suffix indicates raw data that still needs to be preprocessed.
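You can look at one example from the dataset to make sure it loaded correctly (an optional check in the spirit of the assignment; the index is arbitrary, and it assumes classes holds the byte-string labels returned by load_dataset):

# Example of a picture (index chosen arbitrarily)
index = 25
plt.imshow(train_set_x_orig[index])
print("y = " + str(train_set_y[:, index]) + ", it's a '"
      + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")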

Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.

-- Having actually worked through the exercise myself, I found this to be very true!

2 - Shape and Reshape

### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###

print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###

print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))

train_set_x = train_set_x_flatten / 255.
test_set_x = test_set_x_flatten / 255.

Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px * num_px * 3, 1).

A trick when you want to flatten a matrix X of shape (a, b, c, d) to a matrix X_flatten of shape (b*c*d, a) is to use:

X_flatten = X.reshape(X.shape[0], -1).T      # X.T is the transpose of X
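A quick way to convince yourself that the trick works is to try it on a small random array (a toy check, assuming only numpy):

X = np.random.randn(4, 3, 3, 2)            # 4 "examples", each of shape (3, 3, 2)
X_flatten = X.reshape(X.shape[0], -1).T    # each example becomes one column
print(X_flatten.shape)                     # (18, 4), i.e. (3*3*2, 4)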

Remember:

Common steps for pre-processing a new dataset are:

  • Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
  • Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
  • "Standardize" the data

3 - General Architecture of the learning algorithm
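The model is ordinary logistic regression viewed as a one-neuron neural network. For reference, for a single example $x^{(i)}$ it computes:

$$z^{(i)} = w^T x^{(i)} + b$$
$$\hat{y}^{(i)} = a^{(i)} = \sigma(z^{(i)})$$
$$\mathcal{L}(a^{(i)}, y^{(i)}) = -\,y^{(i)}\log(a^{(i)}) - (1 - y^{(i)})\log(1 - a^{(i)})$$

and the cost is the average loss over all $m$ training examples:

$$J = \frac{1}{m}\sum_{i=1}^{m}\mathcal{L}(a^{(i)}, y^{(i)})$$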


Key steps: In this exercise, you will carry out the following steps:

  - Initialize the parameters of the model
  - Learn the parameters for the model by minimizing the cost
  - Use the learned parameters to make predictions (on the test set)
  - Analyse the results and conclude

4 - Building the parts of our algorithm

The main steps for building a Neural Network are:

  1. Define the model structure (such as number of input features)
  2. Initialize the model's parameters
  3. Loop:
    • Calculate current loss (forward propagation)
    • Calculate current gradient (backward propagation)
    • Update parameters (gradient descent)

You often build 1-3 separately and integrate them into one function we call model().

4.1 - Helper functions

# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Compute the sigmoid of z

    Arguments:
    z -- A scalar or numpy array of any size.

    Return:
    s -- sigmoid(z)
    """
    ### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + np.exp(-z))
    ### END CODE HERE ###

    return s
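A quick sanity check (optional snippet, assuming numpy is already imported as above; sigmoid(0) = 0.5 and sigmoid(2) ≈ 0.88):

print("sigmoid([0, 2]) = " + str(sigmoid(np.array([0, 2]))))
# Expected: roughly [0.5  0.88079708]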

4.2 - Initializing parameters

# GRADED FUNCTION: initialize_with_zeros

def initialize_with_zeros(dim):
    """
    This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.

    Argument:
    dim -- size of the w vector we want (or number of parameters in this case)

    Returns:
    w -- initialized vector of shape (dim, 1)
    b -- initialized scalar (corresponds to the bias)
    """
    ### START CODE HERE ### (≈ 1 line of code)
    w, b = np.zeros((dim, 1)), 0
    ### END CODE HERE ###

    assert(w.shape == (dim, 1))
    assert(isinstance(b, float) or isinstance(b, int))

    return w, b
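A quick check (optional; dim = 2 is an arbitrary toy size):

w, b = initialize_with_zeros(2)
print("w = " + str(w))   # [[0.], [0.]]
print("b = " + str(b))   # 0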


4.3 - Forward and Backward propagation
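In vectorized form over all $m$ examples, the forward and backward passes implemented below are:

$$A = \sigma(w^T X + b), \qquad J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log(a^{(i)}) + (1 - y^{(i)})\log(1 - a^{(i)})\right]$$
$$\frac{\partial J}{\partial w} = \frac{1}{m} X (A - Y)^T, \qquad \frac{\partial J}{\partial b} = \frac{1}{m}\sum_{i=1}^{m}(a^{(i)} - y^{(i)})$$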

# GRADED FUNCTION: propagate

def propagate(w, b, X, Y):
    """
    Implement the cost function and its gradient for the propagation explained above

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)

    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b

    Tips:
    - Write your code step by step for the propagation. np.log(), np.dot()
    """

    m = X.shape[1]

    # FORWARD PROPAGATION (FROM X TO COST)
    ### START CODE HERE ### (≈ 2 lines of code)
    A = sigmoid(np.dot(w.T, X) + b)                                                  # compute activation
    cost = -(np.sum(np.dot(Y, np.log(A).T) + np.dot((1 - Y), np.log(1 - A).T))) / m  # compute cost
    ### END CODE HERE ###

    # BACKWARD PROPAGATION (TO FIND GRAD)
    ### START CODE HERE ### (≈ 2 lines of code)
    dw = np.dot(X, (A - Y).T) / m
    db = np.sum(A - Y) / m
    ### END CODE HERE ###

    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    grads = {"dw": dw,
             "db": db}

    return grads, cost
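A small sanity check with hand-computed values (a toy example; the expected numbers in the comments are approximate):

w, b = np.array([[1.], [2.]]), 2.
X = np.array([[1., 2., -1.], [3., 4., -3.2]])
Y = np.array([[1, 0, 1]])
grads, cost = propagate(w, b, X, Y)
print("dw = " + str(grads["dw"]))   # approx [[0.99845601], [2.39507239]]
print("db = " + str(grads["db"]))   # approx 0.00145558
print("cost = " + str(cost))        # approx 5.801545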

4.4 - Optimization


# GRADED FUNCTION: optimize

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    """
    This function optimizes w and b by running a gradient descent algorithm

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps

    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.

    Tips:
    You basically need to write down two steps and iterate through them:
        1) Calculate the cost and the gradient for the current parameters. Use propagate().
        2) Update the parameters using gradient descent rule for w and b.
    """

    costs = []

    for i in range(num_iterations):

        # Cost and gradient calculation (≈ 1-4 lines of code)
        ### START CODE HERE ###
        grads, cost = propagate(w, b, X, Y)
        ### END CODE HERE ###

        # Retrieve derivatives from grads
        dw = grads["dw"]
        db = grads["db"]

        # update rule (≈ 2 lines of code)
        ### START CODE HERE ###
        w = w - learning_rate * dw
        b = b - learning_rate * db
        ### END CODE HERE ###

        # Record the costs
        if i % 100 == 0:
            costs.append(cost)

        # Print the cost every 100 training iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))

    params = {"w": w,
              "b": b}

    grads = {"dw": dw,
             "db": db}

    return params, grads, costs
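A minimal usage sketch, continuing the toy w, b, X, Y from the propagate check above (the hyperparameter values here are arbitrary):

params, grads, costs = optimize(w, b, X, Y, num_iterations = 100, learning_rate = 0.009, print_cost = False)
print("w = " + str(params["w"]))
print("b = " + str(params["b"]))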

4.5 - Predict

# GRADED FUNCTION: predict

def predict(w, b, X):
    '''
    Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)

    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''

    m = X.shape[1]
    Y_prediction = np.zeros((1, m))
    w = w.reshape(X.shape[0], 1)

    # Compute vector "A" predicting the probabilities of a cat being present in the picture
    ### START CODE HERE ### (≈ 1 line of code)
    A = sigmoid(np.dot(w.T, X) + b)
    ### END CODE HERE ###

    for i in range(A.shape[1]):

        # Convert probabilities A[0,i] to actual predictions p[0,i]
        ### START CODE HERE ### (≈ 4 lines of code)
        if A[:, i] >= 0.5:
            Y_prediction[:, i] = 1
        else:
            Y_prediction[:, i] = 0
        ### END CODE HERE ###

    assert(Y_prediction.shape == (1, m))

    return Y_prediction
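As a side note, the element-wise loop above can be replaced with a single vectorized comparison; a small illustration (not part of the graded solution):

A = np.array([[0.2, 0.7, 0.5]])          # example probabilities
Y_prediction = (A >= 0.5).astype(float)  # threshold at 0.5 without an explicit loop
print(Y_prediction)                      # [[0. 1. 1.]]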

Remember: You've implemented several functions that:

  • Initialize (w,b)
  • Optimize the loss iteratively to learn parameters (w,b):
    • computing the cost and its gradient
    • updating the parameters using gradient descent
  • Use the learned (w,b) to predict the labels for a given set of examples

5 - Merge all functions into a model



# GRADED FUNCTION: model

def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
    """
    Builds the logistic regression model by calling the function you've implemented previously

    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- Set to true to print the cost every 100 iterations

    Returns:
    d -- dictionary containing information about the model.
    """

    ### START CODE HERE ###

    # initialize parameters with zeros (≈ 1 line of code)
    w, b = initialize_with_zeros(X_train.shape[0])

    # Gradient descent (≈ 1 line of code)
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)

    # Retrieve parameters w and b from dictionary "parameters"
    w = parameters["w"]
    b = parameters["b"]

    # Predict test/train set examples (≈ 2 lines of code)
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)

    ### END CODE HERE ###

    # Print train/test Errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))

    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test,
         "Y_prediction_train": Y_prediction_train,
         "w": w,
         "b": b,
         "learning_rate": learning_rate,
         "num_iterations": num_iterations}

    return d

# Run the following cell to train your model.
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
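You can then plot the recorded costs to see the learning curve (a short snippet roughly following the assignment's plotting cell; the x-axis is in hundreds of iterations because costs are stored every 100 iterations):

# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate = " + str(d["learning_rate"]))
plt.show()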


6 - Further analysis 

Reminder: In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate α determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
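The comparison below comes from training the model with several learning rates; a sketch of the loop that produces this kind of output (the learning-rate list matches the results below, but the iteration count and plotting details are assumptions):

learning_rates = [0.01, 0.001, 0.0001, 0.1]
models = {}
for lr in learning_rates:
    print("learning rate is: " + str(lr))
    models[str(lr)] = model(train_set_x, train_set_y, test_set_x, test_set_y,
                            num_iterations = 1500, learning_rate = lr, print_cost = False)
    print('-' * 55)

# Plot the cost curves of all models on one figure
for lr in learning_rates:
    plt.plot(np.squeeze(models[str(lr)]["costs"]), label = str(lr))

plt.ylabel('cost')
plt.xlabel('iterations (hundreds)')
plt.legend(loc = 'upper center', shadow = True)
plt.show()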

learning rate is: 0.01
train accuracy: 99.52153110047847 %
test accuracy: 68.0 %
-------------------------------------------------------
learning rate is: 0.001
train accuracy: 88.99521531100478 %
test accuracy: 64.0 %
-------------------------------------------------------
learning rate is: 0.0001
train accuracy: 68.42105263157895 %
test accuracy: 36.0 %
-------------------------------------------------------
learning rate is: 0.1
train accuracy: 65.55023923444976 %
test accuracy: 34.0 %
-------------------------------------------------------
As you can see, performance does not improve linearly as the learning rate is made larger or smaller.

  • Different learning rates give different costs and thus different predictions results.
  • If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
  • A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
  • In deep learning, we usually recommend that you:
    • Choose the learning rate that better minimizes the cost function.
    • If your model overfits, use other techniques to reduce overfitting. 

Remember from this assignment:

  1. Preprocessing the dataset is important.
  2. You implemented each function separately: initialize(), propagate(), optimize(). Then you built a model().
  3. Tuning the learning rate (which is an example of a "hyperparameter") can make a big difference to the algorithm. You will see more examples of this later in this course!

