TensorFlow Study Notes 4: Logistic Regression


Code source: https://github.com/aymericdamien/TensorFlow-Examples/

```python
'''
A logistic regression learning algorithm example using TensorFlow library.

This example is using the MNIST database of handwritten digits
(http://yann.lecun.com/exdb/mnist/)

Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''

from __future__ import print_function

import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1

# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784])  # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10])   # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b)  # Softmax

# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs,
                                                          y: batch_ys})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch + 1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))

    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print("Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))
```
TensorFlow provides an API (tf.nn.softmax) for building a softmax model, which is used here to classify MNIST digits.
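
As a quick illustration of what tf.nn.softmax computes, here is a minimal sketch assuming a TensorFlow 1.x environment; the logit values are made up:

```python
import tensorflow as tf

# Hypothetical logits for a single example with 3 classes
logits = tf.constant([[2.0, 1.0, 0.1]])
probs = tf.nn.softmax(logits)  # exp(logits) normalized so each row sums to 1

with tf.Session() as sess:
    print(sess.run(probs))  # approx [[0.659 0.242 0.099]]
```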

tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None) reduces a tensor by taking the mean along the given dimensions; here it averages the per-example cross-entropy over the batch to produce the loss function.
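
A toy sketch of how tf.reduce_sum and tf.reduce_mean combine into the cross-entropy loss used above; the label and prediction values are hypothetical, and a TF 1.x session is assumed:

```python
import tensorflow as tf

# Toy batch: 2 one-hot labels and 2 predicted distributions (made-up values)
y    = tf.constant([[1., 0.], [0., 1.]])
pred = tf.constant([[0.8, 0.2], [0.4, 0.6]])

# Per-example cross-entropy: sum over the class axis (reduction_indices=1)
per_example = -tf.reduce_sum(y * tf.log(pred), reduction_indices=1)
# Batch loss: mean over examples, collapsing the batch dimension
cost = tf.reduce_mean(per_example)

with tf.Session() as sess:
    print(sess.run(per_example))  # [-log 0.8, -log 0.6] approx [0.223, 0.511]
    print(sess.run(cost))         # approx 0.367
```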

The loss function is then minimized with gradient descent via tf.train.GradientDescentOptimizer.
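
A minimal sketch of the optimizer on a toy quadratic rather than the MNIST model; the function f(w) = (w - 3)^2 is made up for illustration, TF 1.x assumed:

```python
import tensorflow as tf

# Toy objective: f(w) = (w - 3)^2, minimized at w = 3
w = tf.Variable(0.0)
loss = tf.square(w - 3.0)
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        sess.run(train_op)  # w <- w - 0.1 * dloss/dw
    print(sess.run(w))      # close to 3.0
```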

During testing, tf.equal(x, y, name=None) compares two tensors element-wise and returns a bool tensor.
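
For example (hypothetical inputs, TF 1.x assumed):

```python
import tensorflow as tf

a = tf.constant([1, 2, 3])
b = tf.constant([1, 0, 3])
eq = tf.equal(a, b)  # element-wise comparison

with tf.Session() as sess:
    print(sess.run(eq))  # [ True False  True]
```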

tf.argmax(input, dimension, name=None) returns the index of the largest value of input along the given dimension, as an int64 tensor.
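
For example, picking the predicted class per row of a made-up score matrix (TF 1.x assumed):

```python
import tensorflow as tf

scores = tf.constant([[0.1, 0.7, 0.2],
                      [0.9, 0.05, 0.05]])
labels = tf.argmax(scores, 1)  # index of the max entry in each row

with tf.Session() as sess:
    print(sess.run(labels))  # [1 0], dtype int64
```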

tf.cast(x, dtype, name=None) converts a tensor to another type; in the code it converts the bool tensor of correct predictions to float32 so that taking the mean yields the accuracy.
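
A sketch of how tf.cast and tf.reduce_mean combine into the accuracy computation at the end of the script; the bool vector is hypothetical:

```python
import tensorflow as tf

correct = tf.constant([True, False, True, True])
# bool -> float32: True becomes 1.0, False becomes 0.0
as_float = tf.cast(correct, tf.float32)
accuracy = tf.reduce_mean(as_float)  # fraction of correct predictions

with tf.Session() as sess:
    print(sess.run(as_float))  # [1. 0. 1. 1.]
    print(sess.run(accuracy))  # 0.75
```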
