DeepLearning Study Notes - Improving Deep Neural Networks (Week 3 Assignment: Using TensorFlow)


0- Background:

This assignment uses the TensorFlow framework to build a neural network and make predictions with it.

1- Dependencies:

import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict

%matplotlib inline
np.random.seed(1)

Note: on Windows, TensorFlow currently supports only 64-bit Python 3; other configurations may not work.
Installation is simply pip install tensorflow.
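
A quick sanity check after installation; assuming the install succeeded, this just prints the installed version (these notes use the TF 1.x API, i.e. tf.Session, tf.placeholder, and so on):

import tensorflow as tf
print(tf.__version__)   # prints the installed version, e.g. a 1.x release for the API used here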

Basic TensorFlow operations:

Computing a cost function:

$$loss = \mathcal{L}(\hat{y}, y) = (\hat{y}^{(i)} - y^{(i)})^2 \tag{1}$$

y_hat = tf.constant(36, name='y_hat')            # define a constant with value 36
y = tf.constant(39, name='y')                    # define a constant with value 39
loss = tf.Variable((y - y_hat)**2, name='loss')  # define the variable: loss

init = tf.global_variables_initializer()         # add a node that initializes all the variables;
                                                 # once run, the loss variable is initialized and ready to be computed
with tf.Session() as session:                    # create a session and print the output
    session.run(init)                            # initialize the variables
    print(session.run(loss))                     # print the loss

Output:

9

The typical TensorFlow workflow:
1. Create tensors (variables); at this point nothing has been executed.
2. Write operations between those tensors to build the target function, for example the cost.
3. Initialize the tensors.
4. Create a session.
5. Run the session; this is the step that actually executes the target function.

For example:

a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a, b)
print(c)

Output:

Tensor("Mul:0", shape=(), dtype=int32)

The result is not 20, because the code above only builds the computation graph; nothing has been executed yet. To actually evaluate c, create a session and run it:

sess = tf.Session()
print(sess.run(c))

Output:

20
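
The comments in the graded functions below mention "method 1" and "method 2" for running a session; as a quick reference, here is a minimal sketch of both styles, reusing the tensor c defined above (the names sess1 and sess2 are only illustrative):

# Method 1: create the session explicitly and close it yourself when done
sess1 = tf.Session()
print(sess1.run(c))   # 20
sess1.close()

# Method 2: use a with-block, which closes the session automatically
with tf.Session() as sess2:
    print(sess2.run(c))   # 20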

Using placeholders:
A placeholder is an object whose value is supplied only later, when the session runs, via a "feed dictionary". Use a placeholder whenever you need a variable whose value is not yet assigned at graph-construction time.

# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name='x')
print(sess.run(2 * x, feed_dict={x: 3}))
sess.close()

Output:
6
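
The same placeholder can be fed a different value on every run of the session; a small standalone sketch (the values are arbitrary):

x = tf.placeholder(tf.int64, name='x')
with tf.Session() as sess:
    print(sess.run(2 * x, feed_dict={x: 3}))    # 6
    print(sess.run(2 * x, feed_dict={x: 10}))   # 20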

1-1 Linear function:

Compute the output of the equation Y = WX + b, where W and X are random matrices and b is a random vector.

Assume W has shape (4, 3), X has shape (3, 1), and b has shape (4, 1). X is defined as follows:

X = tf.constant(np.random.randn(3,1), name = "X")

Definition of the linear function:

# GRADED FUNCTION: linear_function

def linear_function():
    """
    Implements a linear function:
            Initializes W to be a random tensor of shape (4,3)
            Initializes X to be a random tensor of shape (3,1)
            Initializes b to be a random tensor of shape (4,1)
    Returns:
    result -- runs the session for Y = WX + b
    """
    np.random.seed(1)

    ### START CODE HERE ### (4 lines of code)
    X = tf.constant(np.random.randn(3,1), name = "x")
    W = tf.Variable(np.random.randn(4,3), name = "w")  # a constant would also work, since the value is fixed here
    b = tf.Variable(np.random.randn(4,1), name = "b")
    Y = tf.add(tf.matmul(W,X), b)
    ### END CODE HERE ###

    # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
    ### START CODE HERE ###
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())  # not needed if W and b are defined as constants
    result = sess.run(Y)
    ### END CODE HERE ###

    # close the session
    sess.close()

    return result

print("result = " + str(linear_function()))

Output:

result = [[-2.15657382]
 [ 2.95891446]
 [-1.08926781]
 [-0.84538042]]

1-2 Computing the sigmoid

TensorFlow ships with the common neural-network activation functions, such as tf.sigmoid and tf.nn.softmax.

# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns:
    results -- the sigmoid of z
    """
    ### START CODE HERE ### (approx. 4 lines of code)
    # Create a placeholder for x
    x = tf.placeholder(tf.float32, name = "x")

    # compute sigmoid(x)
    sigmoid = tf.sigmoid(x)   # add the sigmoid node to the graph

    # Create a session, and run it. Please use the method 2 explained above.
    # Use feed_dict to pass the value of z to x.
    with tf.Session() as sess:
        # Run session and call the output "result"
        result = sess.run(sigmoid, feed_dict = {x: z})
    ### END CODE HERE ###

    return result

Run:

print("sigmoid(0) = " + str(sigmoid(0)))
print("sigmoid(12) = " + str(sigmoid(12)))

Output:

sigmoid(0) = 0.5
sigmoid(12) = 0.999994

1-3 Computing the cost

TensorFlow provides built-in functions for computing the cost of a neural network, so we do not have to write the following computation over the m examples a^[2](i) and y^(i) for i = 1, ..., m ourselves:

$$J = -\frac{1}{m} \sum_{i=1}^{m} \left( y^{(i)} \log a^{[2](i)} + (1 - y^{(i)}) \log\left(1 - a^{[2](i)}\right) \right) \tag{2}$$

Instead, it takes a single line of code. For example, the cross-entropy loss is computed with:
tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)

tf.nn.sigmoid_cross_entropy_with_logits takes the logits z and the labels y; it applies the sigmoid to z internally and computes the cross-entropy cost J:

$$J = -\frac{1}{m} \sum_{i=1}^{m} \left( y^{(i)} \log \sigma\left(z^{[2](i)}\right) + (1 - y^{(i)}) \log\left(1 - \sigma\left(z^{[2](i)}\right)\right) \right) \tag{2}$$

Implementation:

# GRADED FUNCTION: cost

def cost(logits, labels):
    """
    Computes the cost using the sigmoid cross entropy

    Arguments:
    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
    labels -- vector of labels y (1 or 0)

    Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
    in the TensorFlow documentation. So logits will feed into z, and labels into y.

    Returns:
    cost -- runs the session of the cost (formula (2))
    """
    ### START CODE HERE ###
    # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
    z = tf.placeholder(tf.float32, name = "logits")
    y = tf.placeholder(tf.float32, name = "labels")

    # Define the cost function
    cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)

    # Create a session
    sess = tf.Session()

    # Run the session (approx. 1 line).
    sess.run(tf.global_variables_initializer())
    cost = sess.run(cost, feed_dict = {z: logits, y: labels})

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()
    ### END CODE HERE ###

    return cost

Test:

logits = sigmoid(np.array([0.2, 0.4, 0.7, 0.9]))
cost = cost(logits, np.array([0, 0, 1, 1]))
print("cost = " + str(cost))

Output:

cost = [ 1.00538719  1.03664076  0.41385433  0.39956617]

1-4 One Hot encodings

Typically, the values in the label vector y run from 0 to C-1, where C is the number of classes. When C = 4, the y vector needs to be converted as follows:
[Figure: converting a label vector y into a one-hot matrix with C = 4 rows]
This is called "one hot" encoding, because exactly one entry in each column is 1. In TensorFlow it is a single call:
tf.one_hot(labels, depth, axis)

The complete one-hot encoding implementation:

# GRADED FUNCTION: one_hot_matrix

def one_hot_matrix(labels, C):
    """
    Creates a matrix where the i-th row corresponds to the ith class number and the jth column
    corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
    will be 1.

    Arguments:
    labels -- vector containing the labels
    C -- number of classes, the depth of the one hot dimension

    Returns:
    one_hot -- one hot matrix
    """
    ### START CODE HERE ###
    # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
    C = tf.constant(C, name="C")

    # Use tf.one_hot, be careful with the axis (approx. 1 line)
    one_hot_matrix = tf.one_hot(labels, depth=C, axis=0)  # choose axis carefully, otherwise the matrix comes out transposed

    # Create the session (approx. 1 line)
    sess = tf.Session()

    # Run the session (approx. 1 line)
    one_hot = sess.run(one_hot_matrix)

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()
    ### END CODE HERE ###

    return one_hot

Test:

labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print("one_hot = " + str(one_hot))

Output:

one_hot = [[ 0.  0.  0.  1.  0.  0.]
 [ 1.  0.  0.  0.  0.  1.]
 [ 0.  1.  0.  0.  1.  0.]
 [ 0.  0.  1.  0.  0.  0.]]

1-5 Initializing with zeros or ones

TensorFlow creates arrays of zeros and ones with tf.zeros() and tf.ones().
Each takes a shape as input and returns an array of that shape:

# GRADED FUNCTION: ones

def ones(shape):
    """
    Creates an array of ones of dimension shape

    Arguments:
    shape -- shape of the array you want to create

    Returns:
    ones -- array containing only ones
    """
    ### START CODE HERE ###
    # Create "ones" tensor using tf.ones(...). (approx. 1 line)
    ones = tf.ones(shape)

    # Create the session (approx. 1 line)
    sess = tf.Session()

    # Run the session to compute 'ones' (approx. 1 line)
    ones = sess.run(ones)

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()
    ### END CODE HERE ###

    return ones

Test:

print("ones = " + str(ones([3])))
print("ones = " + str(ones([3,2])))

Output:

ones = [ 1.  1.  1.]
ones = [[ 1.  1.]
 [ 1.  1.]
 [ 1.  1.]]

2 Building a neural network with TensorFlow

2-1 The dataset:

The hand-sign image data:

  • Training set: 1080 images (64 by 64 pixels) of hand signs representing the digits 0-5 (180 images per digit)
  • Test set: 120 images (64 by 64 pixels) of hand signs representing the digits 0-5 (20 images per digit)

Note that this is a subset of the SIGNS dataset; the full SIGNS dataset contains many more signs.
Below are sample images together with the digits they represent:

Figure 1: SIGNS dataset

Loading the data:

# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

Display one of the images:

# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print("y = " + str(np.squeeze(Y_train_orig[:, index])))

Output:

y = 5

As before, we flatten and normalize the input images, and one-hot encode the y labels:

# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)

print("Y_train_orig size = " + str(Y_train_orig.shape))
print("number of training examples = " + str(X_train.shape[1]))
print("number of test examples = " + str(X_test.shape[1]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))

Output:

Y_train_orig size = (1, 1080)
number of training examples = 1080
number of test examples = 120
X_train shape: (12288, 1080)
Y_train shape: (6, 1080)
X_test shape: (12288, 120)
Y_test shape: (6, 120)

Here 12288 = 64 × 64 × 3.
The network model is:
LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. Because this is a multi-class problem, the output layer uses the softmax activation.

2-2 Creating placeholders

Create placeholders for X and Y so the training data can be fed in later.

# GRADED FUNCTION: create_placeholders

def create_placeholders(n_x, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
    n_y -- scalar, number of classes (from 0 to 5, so -> 6)

    Returns:
    X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
    Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"

    Tips:
    - You will use None because it lets us be flexible on the number of examples for the placeholders.
      In fact, the number of examples during test/train is different.
    """
    ### START CODE HERE ### (approx. 2 lines)
    X = tf.placeholder(tf.float32, shape=(n_x, None), name = "X")
    Y = tf.placeholder(tf.float32, shape=(n_y, None), name = "Y")
    ### END CODE HERE ###

    return X, Y

Test:

X, Y = create_placeholders(12288, 6)
print("X = " + str(X))
print("Y = " + str(Y))

Output:

X = Tensor("X_4:0", shape=(12288, ?), dtype=float32)
Y = Tensor("Y_1:0", shape=(6, ?), dtype=float32)

2-3 Parameter initialization

The weight matrices W use Xavier initialization; the bias vectors b are initialized to zero.

W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())

Full implementation:

# GRADED FUNCTION: initialize_parameters

def initialize_parameters():
    """
    Initializes parameters to build a neural network with tensorflow. The shapes are:
                        W1 : [25, 12288]
                        b1 : [25, 1]
                        W2 : [12, 25]
                        b2 : [12, 1]
                        W3 : [6, 12]
                        b3 : [6, 1]

    Returns:
    parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
    """
    tf.set_random_seed(1)                   # so that your "random" numbers match ours

    ### START CODE HERE ### (approx. 6 lines of code)
    W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
    b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
    W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
    b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer())
    W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
    b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer())
    ### END CODE HERE ###

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2,
                  "W3": W3,
                  "b3": b3}

    return parameters

Test:

tf.reset_default_graph()
with tf.Session() as sess:
    parameters = initialize_parameters()
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))

Output:

W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref>
b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref>
W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref>
b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref>

No numerical values are shown because the parameters have not been evaluated yet; they only exist as nodes in the graph.
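
To see actual numbers, the variables have to be initialized and run inside the session; a minimal sketch continuing the test above:

tf.reset_default_graph()
with tf.Session() as sess:
    parameters = initialize_parameters()
    sess.run(tf.global_variables_initializer())   # assign the initial values
    print(sess.run(parameters["b1"]))             # now prints a (25, 1) array of zeros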

2-4 Forward propagation in TensorFlow

Forward propagation stops at z3. In TensorFlow, the output of the last linear layer is fed directly into the cost function, so there is no need to compute a3.

# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """
    # Retrieve the parameters from the dictionary "parameters"
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']

    ### START CODE HERE ### (approx. 5 lines)    # Numpy Equivalents:
    Z1 = tf.add(tf.matmul(W1, X), b1)            # Z1 = np.dot(W1, X) + b1
    A1 = tf.nn.relu(Z1)                          # A1 = relu(Z1)
    Z2 = tf.add(tf.matmul(W2, A1), b2)           # Z2 = np.dot(W2, A1) + b2
    A2 = tf.nn.relu(Z2)                          # A2 = relu(Z2)
    Z3 = tf.add(tf.matmul(W3, A2), b3)           # Z3 = np.dot(W3, A2) + b3
    ### END CODE HERE ###

    return Z3

Test:

tf.reset_default_graph()
with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    print("Z3 = " + str(Z3))

Output:

Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32)

At this point, forward propagation still produces no numerical output; Z3 is just another node in the graph.
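
To obtain concrete values for Z3, the variables must be initialized and a value fed for X; a small sketch that feeds a random batch purely for illustration:

tf.reset_default_graph()
with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    sess.run(tf.global_variables_initializer())
    # feed a random batch of 2 "images" just to see numbers come out
    print(sess.run(Z3, feed_dict={X: np.random.randn(12288, 2)}))   # shape (6, 2)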

2-5 Computing the cost:

The cost is computed with:

tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))

The "logits" and "labels" arguments of tf.nn.softmax_cross_entropy_with_logits must have shape (number of examples, num_classes), so Z3 and Y need to be transposed first.

Code:

# GRADED FUNCTION: compute_cost

def compute_cost(Z3, Y):
    """
    Computes the cost

    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
    Y -- "true" labels vector placeholder, same shape as Z3

    Returns:
    cost - Tensor of the cost function
    """
    # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
    logits = tf.transpose(Z3)   # transpose to (number of examples, num_classes)
    labels = tf.transpose(Y)

    ### START CODE HERE ### (1 line of code)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))
    ### END CODE HERE ###

    return cost

Test:

tf.reset_default_graph()
with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    cost = compute_cost(Z3, Y)
    print("cost = " + str(cost))

Output:

cost = Tensor("Mean:0", shape=(), dtype=float32)

Again, no numerical result is produced yet; cost is just another node in the graph.

2-6 Backward propagation and parameter updates

After computing the cost, we create an "optimizer" object. When the session runs, we call this object together with the cost, and it performs one optimization step using the chosen algorithm and learning rate. For example, a gradient descent optimizer:

optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)

One optimization step is then run with (backpropagation and the parameter update are handled by TensorFlow automatically):

_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})

2-7 Building the model

The model is assembled from the building-block functions defined above:

def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
          num_epochs = 1500, minibatch_size = 32, print_cost = True):
    """
    Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.

    Arguments:
    X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 100 epochs

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    ops.reset_default_graph()                         # to be able to rerun the model without overwriting tf variables
    tf.set_random_seed(1)                             # to keep consistent results
    seed = 3                                          # to keep consistent results
    (n_x, m) = X_train.shape                          # (n_x: input size, m : number of examples in the train set)
    n_y = Y_train.shape[0]                            # n_y : output size
    costs = []                                        # To keep track of the cost

    # Create Placeholders of shape (n_x, n_y)
    ### START CODE HERE ### (1 line)
    X, Y = create_placeholders(n_x, n_y)
    ### END CODE HERE ###

    # Initialize parameters
    ### START CODE HERE ### (1 line)
    parameters = initialize_parameters()
    ### END CODE HERE ###

    # Forward propagation: Build the forward propagation in the tensorflow graph
    ### START CODE HERE ### (1 line)
    Z3 = forward_propagation(X, parameters)
    ### END CODE HERE ###

    # Cost function: Add cost function to tensorflow graph
    ### START CODE HERE ### (1 line)
    cost = compute_cost(Z3, Y)
    ### END CODE HERE ###

    # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
    ### START CODE HERE ### (1 line)
    optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
    ### END CODE HERE ###

    # Initialize all the variables
    init = tf.global_variables_initializer()

    # Start the session to compute the tensorflow graph
    with tf.Session() as sess:

        # Run the initialization
        sess.run(init)

        # Do the training loop
        for epoch in range(num_epochs):

            epoch_cost = 0.                           # Defines a cost related to an epoch
            num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
            seed = seed + 1
            minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

            for minibatch in minibatches:

                # Select a minibatch
                (minibatch_X, minibatch_Y) = minibatch

                # IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the "optimizer" and the "cost"; the feed_dict should contain a minibatch for (X, Y).
                ### START CODE HERE ### (1 line)
                _, minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
                ### END CODE HERE ###

                epoch_cost += minibatch_cost / num_minibatches

            # Print the cost every epoch
            if print_cost == True and epoch % 100 == 0:
                print("Cost after epoch %i: %f" % (epoch, epoch_cost))
            if print_cost == True and epoch % 5 == 0:
                costs.append(epoch_cost)

        # plot the cost
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('iterations (per tens)')
        plt.title("Learning rate =" + str(learning_rate))
        plt.show()

        # let's save the parameters in a variable
        parameters = sess.run(parameters)
        print("Parameters have been trained!")

        # Calculate the correct predictions
        correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))

        # Calculate accuracy on the test set
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

        print("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))

        return parameters

Run the model:

parameters = model(X_train, Y_train, X_test, Y_test)

Output:

Cost after epoch 0: 1.855702
Cost after epoch 100: 1.016458
Cost after epoch 200: 0.733102
Cost after epoch 300: 0.572939
Cost after epoch 400: 0.468774
Cost after epoch 500: 0.381021
Cost after epoch 600: 0.313827
Cost after epoch 700: 0.254280
Cost after epoch 800: 0.203799
Cost after epoch 900: 0.166512
Cost after epoch 1000: 0.140937
Cost after epoch 1100: 0.107750
Cost after epoch 1200: 0.086299
Cost after epoch 1300: 0.060949
Cost after epoch 1400: 0.050934

[Figure: cost per epoch, learning rate = 0.0001]

Parameters have been trained!
Train Accuracy: 0.999074
Test Accuracy: 0.725

The model reaches 72.5% accuracy on the test set.
Accuracy on the training set is much higher, and the large gap between train and test accuracy indicates overfitting. L2 regularization or dropout could be added to reduce it, for example as sketched below.
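
As an illustration only (this is not part of the assignment), L2 regularization could be added in TF 1.x by summing the weight norms into the cost; the function name compute_cost_with_l2 and the parameter lambd are hypothetical:

def compute_cost_with_l2(Z3, Y, parameters, lambd=0.01):
    # standard softmax cross-entropy cost, as in compute_cost above
    logits = tf.transpose(Z3)
    labels = tf.transpose(Y)
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

    # L2 penalty on the weight matrices (biases are usually not regularized)
    l2 = tf.nn.l2_loss(parameters["W1"]) + tf.nn.l2_loss(parameters["W2"]) + tf.nn.l2_loss(parameters["W3"])

    return cross_entropy + lambd * l2

Replacing compute_cost with such a function inside model() would penalize large weights and should narrow the train/test gap, at the price of tuning lambd.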

2-8 Testing on your own image:

import scipy
from PIL import Image
from scipy import ndimage

## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##

# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)

plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))

Output:

Your algorithm predicts: y = 3

[Figure: the thumbs-up test image]

The prediction is wrong. The training set contains no thumbs-up images, so the trained model has no idea what to make of one. This is a case of "mismatched data distribution", which is covered in the Structuring Machine Learning Projects course.
