[Deep Learning] TensorFlow Study Notes: MNIST


References:

TensorFlow Chinese community:
http://www.tensorfly.cn/tfdoc/tutorials/mnist_beginners.html

MNIST

MNIST is an entry-level computer vision dataset of handwritten digit images: 60,000 images for training and 10,000 for testing.
Each MNIST example consists of a 28x28 handwritten digit image together with its label.
Flattening each 28x28 image into a vector gives a length of 784, so mnist.train.images is a [60000, 784] tensor,
and mnist.train.labels is a [60000, 10] matrix of one-hot labels.
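
A quick way to check these shapes is to load the data with the tutorial's input_data helper and print them. The sketch below is illustrative only; it assumes input_data.py from the TensorFlow MNIST tutorial is on the path and that the dataset can be downloaded into MNIST_data/. Note that read_data_sets typically holds out 5,000 of the 60,000 training images as a validation set, so mnist.train may report 55,000 rows.

# Illustrative check of the tensor shapes described above (not part of the original tutorial).
# Assumes the tutorial's input_data.py is available and the dataset downloads into MNIST_data/.
import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print(mnist.train.images.shape)   # e.g. (55000, 784): flattened 28*28 images (5000 held out for validation)
print(mnist.train.labels.shape)   # e.g. (55000, 10): one-hot labels
print(mnist.test.images.shape)    # (10000, 784)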

Building a simple softmax regression model for prediction

Code

# -*- coding:utf-8 -*-
import tensorflow as tf
import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder("float", [None, 784])  # placeholder; None means the first dimension can have any length
W = tf.Variable(tf.zeros([784, 10]))      # W and b can start from arbitrary values (here zeros); they are learned later
b = tf.Variable(tf.zeros([10]))
# W has shape [784, 10] because we multiply the 784-dimensional image vector by it to obtain a
# 10-dimensional vector of evidence, one entry per digit class. b has shape [10], so it can be
# added directly to the output.
y = tf.nn.softmax(tf.matmul(x, W) + b)    # predicted distribution
y_ = tf.placeholder("float", [None, 10])  # true labels

cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
'''
First, tf.log computes the logarithm of each element of y. Next, each element of y_ is multiplied
by the corresponding element of tf.log(y). Finally, tf.reduce_sum sums all elements of the tensor.
(Note that this cross-entropy measures not a single prediction/label pair but the sum of the
cross-entropies over all 100 images in the batch.)
'''
# Ask TensorFlow to minimize the cross-entropy with gradient descent at a learning rate of 0.01.
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

# Train the model by running the training step 1000 times.
for i in range(1000):
    # In each step, grab a random batch of 100 training examples and feed them in place of the
    # placeholders to run train_step.
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Evaluate the model.
'''
First find the labels that were predicted correctly. tf.argmax is a very useful function that
returns the index of the largest entry of a tensor along a given axis. Since the label vectors
are one-hot, the index of the 1 is the class label: tf.argmax(y, 1) is the label the model
predicts for each input x, and tf.argmax(y_, 1) is the true label. tf.equal checks whether the
prediction matches the true label (matching index positions mean a match).
'''
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
# This yields a list of booleans. To get the fraction of correct predictions, cast the booleans
# to floats and take the mean. For example, [True, False, True, True] becomes [1, 0, 1, 1],
# which averages to 0.75.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

Test accuracy: 0.9133
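
A side note on the loss used above: computing -tf.reduce_sum(y_ * tf.log(y)) on top of an explicit softmax can be numerically unstable (the log of a zero probability yields -inf). A common variant is to let TensorFlow compute the softmax and the log together from the raw logits. The sketch below is a possible substitution, assuming a TensorFlow 1.x build that accepts the labels=/logits= keyword arguments; because the loss becomes a mean rather than a sum, the learning rate may need retuning.

# Sketch of a numerically stabler loss (assumption: TF 1.x with keyword arguments).
# The model outputs raw logits; softmax_cross_entropy_with_logits applies softmax and log internally.
logits = tf.matmul(x, W) + b
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)  # averaged loss, so a larger rate is typical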

Building a multi-layer convolutional network for prediction

Implementation code

# -*- coding:utf-8 -*-
import tensorflow as tf
import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
sess = tf.InteractiveSession()

x = tf.placeholder("float", [None, 784])   # placeholder; None means the first dimension can have any length
y_ = tf.placeholder("float", [None, 10])   # true labels

# Helper for creating weight variables.
def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

# Helper for creating bias variables.
def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

# Convolution with stride 1 and SAME padding, so the output has the same spatial size as the input.
def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# Pooling uses 2x2 max pooling.
def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')

# First layer: convolution + max pooling.
# The convolution weight tensor has shape [5, 5, 1, 32]: the first two dimensions are the patch
# size, then the number of input channels, then the number of output channels.
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])

# Reshape x into a 4-D tensor: the 2nd and 3rd dimensions are image width and height, and the
# last dimension is the number of color channels (1 for grayscale; it would be 3 for RGB).
x_image = tf.reshape(x, [-1, 28, 28, 1])

# Convolve x_image with the weights, add the bias, apply ReLU, then max pool.
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# Second layer: stack another similar layer; each 5x5 patch now yields 64 features.
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

# Densely (fully) connected layer.
# The image has been reduced to 7x7; add a fully connected layer with 1024 neurons to process the
# whole image. Reshape the pooled tensor into a batch of vectors, multiply by the weight matrix,
# add the bias, and apply ReLU.
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

'''
To reduce overfitting, apply dropout before the output layer. A placeholder holds the probability
that a neuron's output is kept, so dropout can be enabled during training and disabled during
testing. tf.nn.dropout automatically rescales the kept outputs in addition to masking neurons,
so no extra scaling is needed.
'''
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Output layer: a final softmax layer.
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# Use the more sophisticated Adam optimizer instead of plain gradient descent, pass keep_prob in
# feed_dict to control the dropout rate, and log the training accuracy every 100 iterations.
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run(tf.initialize_all_variables())

for i in range(20000):
  batch = mnist.train.next_batch(50)
  if i % 100 == 0:
    train_accuracy = accuracy.eval(feed_dict={
        x: batch[0], y_: batch[1], keep_prob: 1.0})
    print("step %d, training accuracy %g" % (i, train_accuracy))
  train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g" % accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
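
The 7x7 figure used for the fully connected layer follows from two rounds of 2x2 max pooling with SAME padding: 28 -> 14 -> 7, with 64 channels after the second convolution, hence 7*7*64 = 3136 inputs. The lines below are a hypothetical debugging aid (not part of the original script) that can be appended after the layer definitions to confirm this:

# Hypothetical shape check, appended after the layer definitions above.
print(x_image.get_shape())       # (?, 28, 28, 1)
print(h_pool1.get_shape())       # (?, 14, 14, 32)  28 -> 14 after the first 2x2 max pool
print(h_pool2.get_shape())       # (?, 7, 7, 64)    14 -> 7 after the second 2x2 max pool
print(h_pool2_flat.get_shape())  # (?, 3136)        3136 = 7*7*64, the input size of W_fc1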

Test output:

step 0, training accuracy 0.06
step 100, training accuracy 0.78
step 200, training accuracy 0.88
step 300, training accuracy 0.9
step 400, training accuracy 0.96
step 500, training accuracy 0.92
step 600, training accuracy 1
step 700, training accuracy 0.96
step 800, training accuracy 0.9
step 900, training accuracy 1
step 1000, training accuracy 0.96
... (steps 1100 through 19600 omitted; training accuracy fluctuates between 0.94 and 1, and is mostly 1 from about step 6000 onward) ...
step 19700, training accuracy 1
step 19800, training accuracy 1
step 19900, training accuracy 1
test accuracy 0.9924