Introduction to TensorFlow
Source: Internet · Editor: 程序博客网 · Date: 2024/06/05 09:44
- TensorFlow Analysis
- System perspective
- Implementation perspective
- Computational Graph Architecture
- Application perspective
Application perspective
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""A one-hidden-layer-MLP MNIST-classifier."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# Import the training data (MNIST)
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

# Possibly download and extract the MNIST data set.
# Retrieve the labels as one-hot-encoded vectors.
mnist = input_data.read_data_sets("/tmp/mnist", one_hot=True)

# Create a new graph
graph = tf.Graph()

# Set our graph as the one to add nodes to
with graph.as_default():
    # Placeholder for input examples (None = variable dimension)
    examples = tf.placeholder(shape=[None, 784], dtype=tf.float32)
    # Placeholder for labels
    labels = tf.placeholder(shape=[None, 10], dtype=tf.float32)

    weights = tf.Variable(tf.truncated_normal(shape=[784, 10], stddev=0.1))
    bias = tf.Variable(tf.constant(0.1, shape=[10]))

    # Apply an affine transformation to the input features
    logits = tf.matmul(examples, weights) + bias
    estimates = tf.nn.softmax(logits)

    # Compute the cross-entropy per example
    cross_entropy = -tf.reduce_sum(labels * tf.log(estimates),
                                   reduction_indices=[1])
    # And finally the loss: the mean cross-entropy over the batch
    loss = tf.reduce_mean(cross_entropy)

    # Create a gradient-descent optimizer that minimizes the loss.
    # We choose a learning rate of 0.5.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Find the indices where the predictions were correct
    correct_predictions = tf.equal(tf.argmax(estimates, dimension=1),
                                   tf.argmax(labels, dimension=1))
    accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))

with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    for step in range(1001):
        example_batch, label_batch = mnist.train.next_batch(100)
        feed_dict = {examples: example_batch, labels: label_batch}
        if step % 100 == 0:
            _, loss_value, accuracy_value = session.run(
                [optimizer, loss, accuracy], feed_dict=feed_dict)
            print("Loss at step {0}: {1}".format(step, loss_value))
            print("Accuracy at step {0}: {1}".format(step, accuracy_value))
        else:
            optimizer.run(feed_dict)
```
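To make the forward pass of the graph above concrete, here is a minimal NumPy sketch of the same computation: a softmax over the logits followed by the per-example cross-entropy and the batch-mean loss. The toy `logits` and `labels` values are made up for illustration; only the formulas mirror the TensorFlow code.

```python
import numpy as np

def softmax(logits):
    # Subtract the row-wise max for numerical stability before exponentiating.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

def cross_entropy(estimates, labels):
    # Mirrors -tf.reduce_sum(labels * tf.log(estimates), reduction_indices=[1])
    return -np.sum(labels * np.log(estimates), axis=1)

# Toy batch of two examples with three classes (illustrative values only).
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.0]])
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])

estimates = softmax(logits)          # each row sums to 1
loss = cross_entropy(estimates, labels).mean()  # batch-mean loss
print(loss)
```

Because both examples assign most probability mass to the correct class, the mean cross-entropy comes out small; a confident wrong prediction would drive it up sharply, which is exactly the gradient signal the optimizer in the graph descends on.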