TensorFlow Note 1: Getting Started With TensorFlow


Based on the official tutorial (accessing the original may require a VPN).

A few things to understand before you start:

  • TensorFlow Core is the lowest-level API and is best suited to machine learning researchers
  • the higher-level APIs are built on top of TensorFlow Core and are easier to use
  • tf.contrib.learn helps you manage data sets
  • anything in contrib is still under development, so its API may change

Tensor

A tensor is a multidimensional array; its rank is the number of dimensions.

3 # a rank 0 tensor; this is a scalar with shape []
[1., 2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]
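
As a quick sanity check, here is a minimal sketch (using the TF 1.x API assumed throughout this note) that evaluates the rank and shape of such a tensor:

import tensorflow as tf

t = tf.constant([[1., 2., 3.], [4., 5., 6.]])  # a rank 2 tensor
with tf.Session() as sess:
    print(sess.run(tf.rank(t)))  # 2
print(t.shape)                   # (2, 3), known statically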

TensorFlow Core tutorial

A TensorFlow Core program consists of two discrete sections:

  • Building the computational graph.
  • Running the computational graph.

Representing constants

A tf.constant is immutable; its value is fixed when the node is created:

node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0) # also tf.float32 implicitly
print(node1, node2)

Printing the nodes does not print their values directly:

Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)

You must evaluate them with Session.run():

sess = tf.Session()
print(sess.run([node1, node2]))

[3.0, 4.0]

Operations on tensors

node3 = tf.add(node1, node2)
print("node3: ", node3)
print("sess.run(node3): ", sess.run(node3))

Operations behave just like constants: without Session.run() they have no value.

node3:  Tensor("Add:0", shape=(), dtype=float32)
sess.run(node3):  7.0
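
Opening the session as a context manager is a common TF 1.x idiom worth knowing; a small sketch, reusing node3 from above:

with tf.Session() as sess:
    print(sess.run(node3))  # 7.0; the session is closed automatically on exit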

placeholder

A graph can be parameterized to accept external inputs: a placeholder reserves a spot now and is fed a value later.
It works like a lambda: declare the parameters first, then operate on them.

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)

The feed_dict argument passes concrete values to the placeholders:

print(sess.run(adder_node, {a: 3, b: 4.5}))
print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))

7.5
[ 3.  7.]
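
The graph can be made more complex by composing further operations on top of adder_node; for example (following the official tutorial), tripling the sum:

add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b: 4.5}))

22.5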

Variables

Variables define the trainable parameters of a model, here W and b:

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b

A tf.Variable is not initialized when it is declared; you must run a global initializer first:

init = tf.global_variables_initializer()
sess.run(init)
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))

[ 0.          0.30000001  0.60000002  0.90000004]

We need a placeholder y to supply the target labels, and a loss function to measure how far the model is from them; here the loss is the sum of squared deltas, sum((W*x + b - y)^2):

y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))

23.66

Unlike constants, variables can be reassigned with tf.assign. Setting W = -1 and b = 1 gives the perfect fit:

fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))

0.0

tf.train API

tf.gradients can compute derivatives directly (see the sketch after the next code block), but usually you just define an optimizer and let it do the work:

optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init) # reset values to incorrect defaults.
for i in range(1000):
  sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})

print(sess.run([W, b]))

[array([-0.9999969], dtype=float32), array([ 0.99999082], dtype=float32)]
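
As mentioned above, tf.gradients can also compute the derivatives explicitly; a minimal sketch, reusing the loss, W, and b defined earlier (this is what the optimizer does for you internally):

# dloss/dW and dloss/db as graph nodes
grad_W, grad_b = tf.gradients(loss, [W, b])
print(sess.run([grad_W, grad_b], {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))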

The complete program

import numpy as np
import tensorflow as tf

# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
  sess.run(train, {x: x_train, y: y_train})

# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))

W: [-0.9999969] b: [ 0.99999082] loss: 5.69997e-11


tf.contrib.learn

tf.contrib.learn is a high-level API that simplifies the mechanics of machine learning, including:
- running training loops
- running evaluation loops
- managing data sets
- managing feeding

Basic usage

import tensorflow as tf
# NumPy is often used to load, manipulate and preprocess data.
import numpy as np

# Declare list of features. We only have one real-valued feature. There are many
# other types of columns that are more complicated and useful.
features = [tf.contrib.layers.real_valued_column("x", dimension=1)]

# An estimator is the front end to invoke training (fitting) and evaluation
# (inference). There are many predefined types like linear regression,
# logistic regression, linear classification, logistic classification, and
# many neural network classifiers and regressors. The following code
# provides an estimator that does linear regression.
estimator = tf.contrib.learn.LinearRegressor(feature_columns=features)

# TensorFlow provides many helper methods to read and set up data sets.
# Here we use two data sets: one for training and one for evaluation.
# We have to tell the function how many batches
# of data (num_epochs) we want and how big each batch should be.
x_train = np.array([1., 2., 3., 4.])
y_train = np.array([0., -1., -2., -3.])
x_eval = np.array([2., 5., 8., 1.])
y_eval = np.array([-1.01, -4.1, -7, 0.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x": x_train}, y_train,
                                              batch_size=4,
                                              num_epochs=1000)
eval_input_fn = tf.contrib.learn.io.numpy_input_fn(
    {"x": x_eval}, y_eval, batch_size=4, num_epochs=1000)

# We can invoke 1000 training steps by invoking the fit method and passing the
# training data set.
estimator.fit(input_fn=input_fn, steps=1000)

# Here we evaluate how well our model did.
train_loss = estimator.evaluate(input_fn=input_fn)
eval_loss = estimator.evaluate(input_fn=eval_input_fn)
print("train loss: %r"% train_loss)
print("eval loss: %r"% eval_loss)
train loss: {'global_step': 1000, 'loss': 4.3049088e-08}
eval loss: {'global_step': 1000, 'loss': 0.0025487561}
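
To get predictions out of the trained estimator, here is a hedged sketch; it assumes tf.contrib.learn's predict() accepts an input_fn the way fit() and evaluate() do, and the inputs [5., 6.] are made-up illustration values:

# Hypothetical prediction inputs; shuffle=False keeps the output order stable
predict_input_fn = tf.contrib.learn.io.numpy_input_fn(
    {"x": np.array([5., 6.])}, num_epochs=1, shuffle=False)
# predict() yields one prediction per input example
for p in estimator.predict(input_fn=predict_input_fn):
    print(p)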

Custom models with the low-level API

Declare estimator = tf.contrib.learn.Estimator(model_fn=model) with your own model function, and you get the same interface for evaluating predictions, training steps, and loss.

import numpy as np
import tensorflow as tf

# Declare list of features, we only have one real-valued feature
def model(features, labels, mode):
  # Build a linear model and predict values
  W = tf.get_variable("W", [1], dtype=tf.float64)
  b = tf.get_variable("b", [1], dtype=tf.float64)
  y = W*features['x'] + b
  # Loss sub-graph
  loss = tf.reduce_sum(tf.square(y - labels))
  # Training sub-graph
  global_step = tf.train.get_global_step()
  optimizer = tf.train.GradientDescentOptimizer(0.01)
  train = tf.group(optimizer.minimize(loss),
                   tf.assign_add(global_step, 1))
  # ModelFnOps connects subgraphs we built to the
  # appropriate functionality.
  return tf.contrib.learn.ModelFnOps(
      mode=mode, predictions=y,
      loss=loss,
      train_op=train)

estimator = tf.contrib.learn.Estimator(model_fn=model)
# define our data sets
x_train = np.array([1., 2., 3., 4.])
y_train = np.array([0., -1., -2., -3.])
x_eval = np.array([2., 5., 8., 1.])
y_eval = np.array([-1.01, -4.1, -7, 0.])
input_fn = tf.contrib.learn.io.numpy_input_fn({"x": x_train}, y_train, 4, num_epochs=1000)
# eval_input_fn was missing from the original listing; defined as in the previous example
eval_input_fn = tf.contrib.learn.io.numpy_input_fn(
    {"x": x_eval}, y_eval, batch_size=4, num_epochs=1000)

# train
estimator.fit(input_fn=input_fn, steps=1000)
# Here we evaluate how well our model did.
train_loss = estimator.evaluate(input_fn=input_fn)
eval_loss = estimator.evaluate(input_fn=eval_input_fn)
print("train loss: %r"% train_loss)
print("eval loss: %r"% eval_loss)
train loss: {'global_step': 1000, 'loss': 4.9380226e-11}
eval loss: {'global_step': 1000, 'loss': 0.01010081}