Deep Learning (DeepLearning.ai) Course Series Notes: 14. Introduction to TensorFlow


Note: during reposting, the images were lost and the code formatting was garbled.

For a better reading experience, please visit the original version:

http://www.missshi.cn/api/view/blog/59bbcb46e519f50d04000206

Ps: the first visit may take a while (about 8s) because the js files are large.

 

In the previous posts, we always implemented our neural networks with numpy alone.

For large neural network models, however, this is very time-consuming.

Fortunately, there are now many mature deep learning frameworks that can help us. This post covers the framework released by Google: TensorFlow.

 

When working with the TensorFlow framework, the usual steps are:

1. Initialize the variables

2. Start a Session

3. Train the algorithm

4. Complete the neural network

 

The TensorFlow library

First, let's get to know the imports and library functions we will use:

    import math
    import numpy as np
    import h5py
    import matplotlib.pyplot as plt
    import tensorflow as tf
    from tensorflow.python.framework import ops
    from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict

    %matplotlib inline
    np.random.seed(1)

The relevant helper functions in tf_utils are as follows:

    def load_dataset():
        train_dataset = h5py.File('datasets/train_signs.h5', "r")
        train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features
        train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels

        test_dataset = h5py.File('datasets/test_signs.h5', "r")
        test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features
        test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels

        classes = np.array(test_dataset["list_classes"][:]) # the list of classes

        train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
        test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

        return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes

    def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
        """
        Creates a list of random minibatches from (X, Y)

        Arguments:
        X -- input data, of shape (input size, number of examples)
        Y -- true "label" vector, of shape (1, number of examples)
        mini_batch_size -- size of the mini-batches, integer
        seed -- this is only for the purpose of grading, so that your "random" minibatches are the same as ours.

        Returns:
        mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
        """

        m = X.shape[1]                  # number of training examples
        mini_batches = []
        np.random.seed(seed)

        # Step 1: Shuffle (X, Y)
        permutation = list(np.random.permutation(m))
        shuffled_X = X[:, permutation]
        shuffled_Y = Y[:, permutation].reshape((Y.shape[0], m))

        # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
        num_complete_minibatches = math.floor(m / mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
        for k in range(0, num_complete_minibatches):
            mini_batch_X = shuffled_X[:, k * mini_batch_size : k * mini_batch_size + mini_batch_size]
            mini_batch_Y = shuffled_Y[:, k * mini_batch_size : k * mini_batch_size + mini_batch_size]
            mini_batch = (mini_batch_X, mini_batch_Y)
            mini_batches.append(mini_batch)

        # Handling the end case (last mini-batch < mini_batch_size)
        if m % mini_batch_size != 0:
            mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
            mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m]
            mini_batch = (mini_batch_X, mini_batch_Y)
            mini_batches.append(mini_batch)

        return mini_batches

    def convert_to_one_hot(Y, C):
        Y = np.eye(C)[Y.reshape(-1)].T
        return Y

    def predict(X, parameters):

        W1 = tf.convert_to_tensor(parameters["W1"])
        b1 = tf.convert_to_tensor(parameters["b1"])
        W2 = tf.convert_to_tensor(parameters["W2"])
        b2 = tf.convert_to_tensor(parameters["b2"])
        W3 = tf.convert_to_tensor(parameters["W3"])
        b3 = tf.convert_to_tensor(parameters["b3"])

        params = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2,
                  "W3": W3,
                  "b3": b3}

        x = tf.placeholder("float", [12288, 1])

        # forward_propagation_for_predict is also defined in tf_utils
        z3 = forward_propagation_for_predict(x, params)
        p = tf.argmax(z3)

        sess = tf.Session()
        prediction = sess.run(p, feed_dict = {x: X})

        return prediction

Ps: to make it easier to follow along, we provide the original dataset train_signs.h5.

Please visit http://www.missshi.cn/#/books and search for train_signs.h5 to download it; on the first visit the js may load slowly, so please be patient (about 10s).

If you find the site useful, please help spread the word! Please don't share the training set directly in QQ groups or on CSDN.


Now we have imported all the libraries we need.

Next, let's start by computing the loss of a single training example:

    y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
    y = tf.constant(39, name='y')                    # Define y. Set to 39.

    loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss

    init = tf.global_variables_initializer()         # When init is run later (session.run(init)),
                                                     # the loss variable will be initialized and ready to be computed
    with tf.Session() as session:                    # Create a session and print the output
        session.run(init)                            # Initializes the variables
        print(session.run(loss))                     # Prints the loss
        # 9

TensorFlow code generally follows this structure:

1. Create tensors (variables) that are not evaluated yet

2. Define the operations between those tensors

3. Initialize the tensors

4. Create a Session

5. Run the Session; all the operations defined above are executed at this step


Now, let's go through a few more examples to get used to this idea:

    a = tf.constant(2)
    b = tf.constant(10)
    c = tf.multiply(a, b)
    print(c)
    # Tensor("Mul:0", shape=(), dtype=int32)

As we said before, no computation happens while the graph is being defined. So c is not 20 here; it is an unevaluated int32 tensor. To actually get the value, we run it in a session:

    sess = tf.Session()
    print(sess.run(c))
    # 20

Next, let's continue with placeholders.

A placeholder is a variable whose value is supplied only at run time, via feed_dict:

    x = tf.placeholder(tf.int64, name = 'x')
    print(sess.run(2 * x, feed_dict = {x: 3}))
    # 6
    sess.close()


Linear function

Next, we use TensorFlow to implement one of the most common functions in neural networks: the linear function Y = WX + b.

    def linear_function():
        """
        Implements a linear function:
                Initializes W to be a random tensor of shape (4,3)
                Initializes X to be a random tensor of shape (3,1)
                Initializes b to be a random tensor of shape (4,1)
        Returns:
        result -- runs the session for Y = WX + b
        """

        np.random.seed(1)

        ### START CODE HERE ### (4 lines of code)
        X = tf.constant(np.random.randn(3,1), name = "X")
        W = tf.constant(np.random.randn(4,3), name = "W")
        b = tf.constant(np.random.randn(4,1), name = "b")
        Y = tf.matmul(W, X) + b
        ### END CODE HERE ###

        # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate

        ### START CODE HERE ###
        sess = tf.Session()
        result = sess.run(Y)
        ### END CODE HERE ###

        # close the session
        sess.close()

        return result
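As a quick sanity check (np.random.seed(1) fixes the draws, so the result is deterministic; the values below are what the seeded draws give, up to rounding):

    print("result = " + str(linear_function()))
    # result = [[-2.15657382]
    #           [ 2.95891446]
    #           [-1.08926781]
    #           [-0.84538042]]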


The sigmoid function

    def sigmoid(z):
        """
        Computes the sigmoid of z

        Arguments:
        z -- input value, scalar or vector

        Returns:
        results -- the sigmoid of z
        """

        ### START CODE HERE ### (approx. 4 lines of code)
        # Create a placeholder for x. Name it 'x'.
        x = tf.placeholder(tf.float32, name = "x")

        # compute sigmoid(x)
        sigmoid = tf.sigmoid(x)

        # Create a session, and run it. Please use the method 2 explained above.
        # You should use a feed_dict to pass z's value to x.
        with tf.Session() as sess:
            # Run session and call the output "result"
            result = sess.run(sigmoid, feed_dict = {x: z})

        ### END CODE HERE ###

        return result
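A quick check: sigmoid(0) must be exactly 0.5, and sigmoid(12) is very close to 1:

    print("sigmoid(0)  = " + str(sigmoid(0)))    # 0.5
    print("sigmoid(12) = " + str(sigmoid(12)))   # about 0.999994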


Computing the cost function

The cost used here is the sigmoid cross-entropy, which for a logit z and a label y computes, per example, L(z, y) = −y·log σ(z) − (1 − y)·log(1 − σ(z)). The implementation is as follows:

    def cost(logits, labels):
        """
        Computes the cost using the sigmoid cross entropy

        Arguments:
        logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
        labels -- vector of labels y (1 or 0)

        Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
        in the TensorFlow documentation. So logits will feed into z, and labels into y.

        Returns:
        cost -- runs the session of the cost (formula (2))
        """

        ### START CODE HERE ###

        # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
        z = tf.placeholder(tf.float32, name = "logits")
        y = tf.placeholder(tf.float32, name = "labels")

        # Use the loss function (approx. 1 line)
        cost = tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y)

        # Create a session (approx. 1 line). See method 1 above.
        sess = tf.Session()

        # Run the session (approx. 1 line).
        cost = sess.run(cost, feed_dict = {z: logits, y: labels})

        # Close the session (approx. 1 line). See method 1 above.
        sess.close()
        ### END CODE HERE ###

        return cost
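To try it out, we can feed in the sigmoid of a few raw values (a small check that reuses the sigmoid() defined above; the outputs are the per-example sigmoid cross-entropy losses):

    logits = sigmoid(np.array([0.2, 0.4, 0.7, 0.9]))
    print("cost = " + str(cost(logits, np.array([0., 0., 1., 1.]))))
    # cost = [ 1.00538719  1.03664088  0.41385433  0.39956614]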

See that?

With a single function, tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y), we implemented that whole cost function.

This is the charm of deep learning frameworks!


One-hot encoding

Usually, for a multi-class problem, the labels we are given are integers from 0 to C-1, where C is the number of classes.

Before training, however, we need to convert each of these integers into a C-dimensional one-hot vector:

    def one_hot_matrix(labels, C):
        """
        Creates a matrix where the i-th row corresponds to the ith class number and the jth column
        corresponds to the jth training example. So if example j had a label i, then entry (i,j)
        will be 1.
        Arguments:
        labels -- vector containing the labels
        C -- number of classes, the depth of the one hot dimension
        Returns:
        one_hot -- one hot matrix
        """
        ### START CODE HERE ###
        # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
        C = tf.constant(C, name = "C")
        # Use tf.one_hot, be careful with the axis: axis=0 puts the classes along the rows (approx. 1 line)
        one_hot_matrix = tf.one_hot(labels, C, axis = 0)
        # Create the session (approx. 1 line)
        sess = tf.Session()
        # Run the session (approx. 1 line)
        one_hot = sess.run(one_hot_matrix)
        # Close the session (approx. 1 line). See method 1 above.
        sess.close()
        ### END CODE HERE ###
        return one_hot
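For example, with C = 4 classes, a label j on example i puts a 1 in row j, column i:

    labels = np.array([1, 2, 3, 0, 2, 1])
    one_hot = one_hot_matrix(labels, C = 4)
    print("one_hot = " + str(one_hot))
    # one_hot = [[ 0.  0.  0.  1.  0.  0.]
    #            [ 1.  0.  0.  0.  0.  1.]
    #            [ 0.  1.  0.  0.  1.  0.]
    #            [ 0.  0.  1.  0.  0.  0.]]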


All-zeros and all-ones initialization

    def zeros(shape):
        """
        Creates an array of zeros of dimension shape
        Arguments:
        shape -- shape of the array you want to create
        Returns:
        zeros -- array containing only zeros
        """
        ### START CODE HERE ###
        # Create "zeros" tensor using tf.zeros(...). (approx. 1 line)
        zeros = tf.zeros(shape)
        # Create the session (approx. 1 line)
        sess = tf.Session()
        # Run the session to compute 'zeros' (approx. 1 line)
        zeros = sess.run(zeros)
        # Close the session (approx. 1 line). See method 1 above.
        sess.close()
        ### END CODE HERE ###
        return zeros

    def ones(shape):
        """
        Creates an array of ones of dimension shape
        Arguments:
        shape -- shape of the array you want to create
        Returns:
        ones -- array containing only ones
        """
        ### START CODE HERE ###
        # Create "ones" tensor using tf.ones(...). (approx. 1 line)
        ones = tf.ones(shape)
        # Create the session (approx. 1 line)
        sess = tf.Session()
        # Run the session to compute 'ones' (approx. 1 line)
        ones = sess.run(ones)
        # Close the session (approx. 1 line). See method 1 above.
        sess.close()
        ### END CODE HERE ###
        return ones
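For example:

    print("ones([3])  = " + str(ones([3])))     # [ 1.  1.  1.]
    print("zeros([3]) = " + str(zeros([3])))    # [ 0.  0.  0.]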


Building a neural network model with TensorFlow

Building a neural network model with TensorFlow consists of two main phases:

1. Build the computation graph

2. Run it to train

Problem statement:

We need to build a neural network that recognizes six hand signs representing the digits 0 through 5.

Each image is 64x64 pixels (RGB). The training set contains 1080 images, and the test set contains 120 images.

    # Load the dataset
    X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

Let's display one of the images:

    # Example of a picture
    index = 0
    plt.imshow(X_train_orig[index])
    print ("y = " + str(np.squeeze(Y_train_orig[:, index])))

Next, we preprocess the loaded dataset: flattening and normalizing the images, and applying the one-hot encoding introduced above.

    X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
    X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
    # Normalize image vectors
    X_train = X_train_flatten/255.
    X_test = X_test_flatten/255.
    # Convert training and test labels to one hot matrices
    Y_train = convert_to_one_hot(Y_train_orig, 6)
    Y_test = convert_to_one_hot(Y_test_orig, 6)
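A quick shape check (each 64x64x3 image flattens into 64*64*3 = 12288 values, so these shapes follow directly from the dataset sizes above):

    print("X_train shape: " + str(X_train.shape))   # (12288, 1080)
    print("Y_train shape: " + str(Y_train.shape))   # (6, 1080)
    print("X_test shape:  " + str(X_test.shape))    # (12288, 120)
    print("Y_test shape:  " + str(Y_test.shape))    # (6, 120)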

The model we need to build is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX, with layer sizes 12288 -> 25 -> 12 -> 6 (the architecture figure was lost in reposting; the shapes follow from initialize_parameters below).

The softmax layer is the most common output layer for multi-class problems.

Next, we need to create some placeholders:

    def create_placeholders(n_x, n_y):
        """
        Creates the placeholders for the tensorflow session.
        Arguments:
        n_x -- scalar, size of an image vector (num_px * num_px * 3 = 64 * 64 * 3 = 12288)
        n_y -- scalar, number of classes (from 0 to 5, so -> 6)
        Returns:
        X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
        Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
        Tips:
        - You will use None because it lets us be flexible on the number of examples for the placeholders.
          In fact, the number of examples during test/train is different.
        """

        ### START CODE HERE ### (approx. 2 lines)
        X = tf.placeholder(tf.float32, [n_x, None], name = "X")
        Y = tf.placeholder(tf.float32, [n_y, None], name = "Y")
        ### END CODE HERE ###
        return X, Y
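A quick check (in TensorFlow 1.x an unspecified dimension prints as ?):

    X, Y = create_placeholders(12288, 6)
    print("X = " + str(X))   # Tensor("X:0", shape=(12288, ?), dtype=float32)
    print("Y = " + str(Y))   # Tensor("Y:0", shape=(6, ?), dtype=float32)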

Next, we need to initialize the parameters:

    def initialize_parameters():
        """
        Initializes parameters to build a neural network with tensorflow. The shapes are:
        W1 : [25, 12288]
        b1 : [25, 1]
        W2 : [12, 25]
        b2 : [12, 1]
        W3 : [6, 12]
        b3 : [6, 1]
        Returns:
        parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
        """
        tf.set_random_seed(1)  # so that your "random" numbers match ours
        ### START CODE HERE ### (approx. 6 lines of code)
        W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
        b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
        W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
        b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer())
        W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
        b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer())
        ### END CODE HERE ###

        parameters = {"W1": W1,
                      "b1": b1,
                      "W2": W2,
                      "b2": b2,
                      "W3": W3,
                      "b3": b3}
        return parameters
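Note that these are tf.Variable handles, not values: nothing is actually initialized until a session runs an initializer. For example:

    tf.reset_default_graph()
    with tf.Session() as sess:
        parameters = initialize_parameters()
        print("W1 = " + str(parameters["W1"]))
        # W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref>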

Then, we need to implement the forward propagation:

    def forward_propagation(X, parameters):
        """
        Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
        Arguments:
        X -- input dataset placeholder, of shape (input size, number of examples)
        parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                      the shapes are given in initialize_parameters

        Returns:
        Z3 -- the output of the last LINEAR unit
        """
        # Retrieve the parameters from the dictionary "parameters"
        W1 = parameters['W1']
        b1 = parameters['b1']
        W2 = parameters['W2']
        b2 = parameters['b2']
        W3 = parameters['W3']
        b3 = parameters['b3']
        ### START CODE HERE ### (approx. 5 lines)  # Numpy Equivalents:
        Z1 = tf.matmul(W1, X) + b1   # Z1 = np.dot(W1, X) + b1
        A1 = tf.nn.relu(Z1)          # A1 = relu(Z1)
        Z2 = tf.matmul(W2, A1) + b2  # Z2 = np.dot(W2, A1) + b2
        A2 = tf.nn.relu(Z2)          # A2 = relu(Z2)
        Z3 = tf.matmul(W3, A2) + b3  # Z3 = np.dot(W3, A2) + b3
        ### END CODE HERE ###
        return Z3

Finally, we need to compute the cost. Note that forward propagation stops at Z3, the last linear output: the softmax activation is applied inside the cost function itself.

    def compute_cost(Z3, Y):
        """
        Computes the cost
        Arguments:
        Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
        Y -- "true" labels vector placeholder, same shape as Z3
        Returns:
        cost - Tensor of the cost function
        """
        # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
        logits = tf.transpose(Z3)
        labels = tf.transpose(Y)
        ### START CODE HERE ### (1 line of code)
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))
        ### END CODE HERE ###
        return cost

Note that we do not have to implement backpropagation or the parameter updates ourselves: TensorFlow, like other frameworks, derives them automatically from the forward propagation and the cost function we wrote.
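Concretely, one line attaches an optimizer to the graph, and each run of that op performs one forward pass, backpropagation, and a parameter update. A minimal sketch (batch_X and batch_Y stand in for one mini-batch of data):

    # One line defines backprop and the update rule for the whole graph:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.01).minimize(cost)
    # Each run does forward prop, backprop, and one update step:
    _, batch_cost = sess.run([optimizer, cost], feed_dict = {X: batch_X, Y: batch_Y})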

Now, let's assemble the functions we just implemented into a full model:

    def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
              num_epochs = 1500, minibatch_size = 32, print_cost = True):
        """
        Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
        Arguments:
        X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
        Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
        X_test -- test set, of shape (input size = 12288, number of test examples = 120)
        Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
        learning_rate -- learning rate of the optimization
        num_epochs -- number of epochs of the optimization loop
        minibatch_size -- size of a minibatch
        print_cost -- True to print the cost every 100 epochs
        Returns:
        parameters -- parameters learnt by the model. They can then be used to predict.
        """
        ops.reset_default_graph()   # to be able to rerun the model without overwriting tf variables
        tf.set_random_seed(1)       # to keep consistent results
        seed = 3                    # to keep consistent results
        (n_x, m) = X_train.shape    # (n_x: input size, m : number of examples in the train set)
        n_y = Y_train.shape[0]      # n_y : output size
        costs = []                  # To keep track of the cost

        # Create Placeholders of shape (n_x, n_y)
        ### START CODE HERE ### (1 line)
        X, Y = create_placeholders(n_x, n_y)
        ### END CODE HERE ###

        # Initialize parameters
        ### START CODE HERE ### (1 line)
        parameters = initialize_parameters()
        ### END CODE HERE ###

        # Forward propagation: Build the forward propagation in the tensorflow graph
        ### START CODE HERE ### (1 line)
        Z3 = forward_propagation(X, parameters)
        ### END CODE HERE ###

        # Cost function: Add cost function to tensorflow graph
        ### START CODE HERE ### (1 line)
        cost = compute_cost(Z3, Y)
        ### END CODE HERE ###

        # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
        ### START CODE HERE ### (1 line)
        optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
        ### END CODE HERE ###

        # Initialize all the variables
        init = tf.global_variables_initializer()

        # Start the session to compute the tensorflow graph
        with tf.Session() as sess:
            # Run the initialization
            sess.run(init)

            # Do the training loop
            for epoch in range(num_epochs):
                epoch_cost = 0.                            # Defines a cost related to an epoch
                num_minibatches = int(m / minibatch_size)  # number of minibatches of size minibatch_size in the train set
                seed = seed + 1
                minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

                for minibatch in minibatches:
                    # Select a minibatch
                    (minibatch_X, minibatch_Y) = minibatch
                    # IMPORTANT: The line that runs the graph on a minibatch.
                    # Run the session to execute the "optimizer" and the "cost";
                    # the feed_dict should contain a minibatch for (X, Y).
                    ### START CODE HERE ### (1 line)
                    _ , minibatch_cost = sess.run([optimizer, cost], feed_dict = {X: minibatch_X, Y: minibatch_Y})
                    ### END CODE HERE ###
                    epoch_cost += minibatch_cost / num_minibatches

                # Print the cost every 100 epochs, record it every 5
                if print_cost == True and epoch % 100 == 0:
                    print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
                if print_cost == True and epoch % 5 == 0:
                    costs.append(epoch_cost)

            # plot the cost
            plt.plot(np.squeeze(costs))
            plt.ylabel('cost')
            plt.xlabel('epochs (per fives)')
            plt.title("Learning rate =" + str(learning_rate))
            plt.show()

            # lets save the parameters in a variable
            parameters = sess.run(parameters)
            print ("Parameters have been trained!")

            # Calculate the correct predictions
            correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))

            # Calculate accuracy on the test set
            accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

            print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
            print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
            return parameters

Let's put our model to the test:

    parameters = model(X_train, Y_train, X_test, Y_test)

After training for a while, the training-set accuracy reaches 99.9%, while the test-set accuracy is only 71.7%.

There is clearly some overfitting! Think about how you would handle it; one common option is sketched below.
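One common remedy is L2 regularization. As a hedged sketch (this is not part of the original assignment, and beta is an assumed hyperparameter), compute_cost could be extended to penalize large weights:

    # Sketch: an L2-regularized cost; assumes we also pass in `parameters`
    def compute_cost_l2(Z3, Y, parameters, beta = 0.01):
        logits = tf.transpose(Z3)
        labels = tf.transpose(Y)
        # Penalize the squared magnitude of every weight matrix
        l2 = beta * (tf.nn.l2_loss(parameters["W1"])
                     + tf.nn.l2_loss(parameters["W2"])
                     + tf.nn.l2_loss(parameters["W3"]))
        return tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels)) + l2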


Testing with your own images

Besides the images in the training and test sets, we can also test the model with some other pictures:

    import scipy
    from PIL import Image
    from scipy import ndimage

    ## START CODE HERE ## (PUT YOUR IMAGE NAME)
    my_image = "thumbs_up.jpg"
    ## END CODE HERE ##

    # We preprocess your image to fit your algorithm.
    fname = "images/" + my_image
    image = np.array(ndimage.imread(fname, flatten=False))
    my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
    my_image_prediction = predict(my_image, parameters)

    plt.imshow(image)
    print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
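One caveat: scipy.ndimage.imread and scipy.misc.imresize were deprecated and later removed from SciPy, so the preprocessing above fails on recent installs. An equivalent sketch using PIL (already imported above) would be:

    # Same preprocessing without the removed SciPy helpers
    image = np.array(Image.open(fname).convert("RGB"))
    my_image = np.array(Image.fromarray(image).resize((64, 64))).reshape((1, 64*64*3)).T
    my_image_prediction = predict(my_image, parameters)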

That concludes our introduction to TensorFlow; the practical work in later posts of this series will all be done with it!


 
