TensorFlow for Beginners: Logistic Regression


Implementing logistic regression with TensorFlow
First, we define two helper functions so that the same code does not have to be repeated later in the program.

def init_weights(shape):
    # Wrap a small random normal tensor in a Variable; stddev=0.01 keeps
    # the initial weights close to zero.
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

def model(X, w):
    # A single linear layer. It returns raw logits; the softmax is applied
    # inside the loss function in step three.
    return tf.matmul(X, w)
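As a quick shape check, the matrix multiply inside model maps a batch of flattened 28x28 images to one score per digit class. The following NumPy sketch (not part of the original post; the sizes mirror the MNIST setup below) shows the shapes involved:

import numpy as np

batch = np.random.rand(128, 784)       # 128 flattened 28x28 images
w = np.random.randn(784, 10) * 0.01    # same scale as init_weights
logits = batch.dot(w)
print(logits.shape)                    # (128, 10): one score per class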

Second, we load the MNIST dataset; see the official tutorial for details. The input_data module ships with TensorFlow under tensorflow.examples.tutorials.mnist.

# Load the data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels
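If the download succeeds, the arrays have the shapes below (assuming the standard 55,000-image training split and 10,000-image test split; the labels are one-hot vectors because of one_hot=True):

print(trX.shape, trY.shape)   # (55000, 784) (55000, 10)
print(teX.shape, teY.shape)   # (10000, 784) (10000, 10)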

Third, we build the loss function. We train the model using softmax plus cross entropy.

# Build the loss: softmax plus cross entropy, averaged over the batch,
# minimized with plain gradient descent
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y))
learning_rate = 0.01
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
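tf.nn.softmax_cross_entropy_with_logits fuses the softmax and the cross entropy into a single numerically stable op. The following NumPy sketch (an illustration of the math, not the library's actual implementation) spells out the computation it performs on one-hot labels:

import numpy as np

def softmax_cross_entropy(logits, labels):
    # Stable log-softmax: shift by the per-row max before exponentiating.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Cross entropy: negative log-probability of the true class.
    return -(labels * log_probs).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])              # true class is index 0
print(softmax_cross_entropy(logits, labels))      # ~[0.417]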

The complete code is as follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

def init_weights(shape):
    # Small random init so training starts from near-zero weights
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

def model(X, w):
    # A single linear layer: returns raw logits
    return tf.matmul(X, w)

# Load the data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels

# Placeholders for the inputs and the one-hot labels
X = tf.placeholder("float", [None, 784])
Y = tf.placeholder("float", [None, 10])

# Initialize the weights
w = init_weights([784, 10])

# Build the model
py_x = model(X, w)

# Build the loss: softmax plus cross entropy, minimized with gradient descent
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y))
learning_rate = 0.01
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Predicted class: index of the largest logit
predict_op = tf.argmax(py_x, 1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        # Train on mini-batches of 128 samples
        for start, end in zip(range(0, len(trX), 128), range(128, len(trX), 128)):
            sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end]})
        # After each epoch, report accuracy on the test set
        print(i, np.mean(np.argmax(teY, axis=1) ==
                         sess.run(predict_op, feed_dict={X: teX, Y: teY})))
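One detail worth noting: the zip(range(...), range(...)) idiom in the training loop pairs consecutive offsets into (start, end) slices of 128 samples, and silently drops a final partial batch (with 55,000 training images, the last 88 are skipped each epoch). A small sketch of the idiom:

# zip pairs each start offset with the next offset; the tail that has no
# matching end offset is dropped.
print(list(zip(range(0, 10, 4), range(4, 10, 4))))   # [(0, 4), (4, 8)]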

Also published on Jianshu: http://www.jianshu.com/p/f51f0ca4278c
