Tensor
Knowing TensorFlow Core principles will give you a great mental model of how things are working internally when you use the more compact higher level API.
A tensor is the central unit of data in TensorFlow.
A tensor's rank is its number of dimensions. For example:

3 # a rank 0 tensor; a scalar with shape []
[1., 2., 3.] # a rank 1 tensor; a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]
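The rank/shape correspondence can be checked in plain Python with a small helper (hypothetical, not part of TensorFlow) that walks the nesting of a list literal; the rank is simply the length of the shape:

```python
# Hypothetical helper: infer the shape of a nested-list "tensor" by
# descending into its first element at each nesting level.
def shape_of(t):
    shape = []
    while isinstance(t, list):
        shape.append(len(t))
        t = t[0]
    return shape

print(shape_of(3.0))                               # [] -> rank 0
print(shape_of([1., 2., 3.]))                      # [3] -> rank 1
print(shape_of([[1., 2., 3.], [4., 5., 6.]]))      # [2, 3] -> rank 2
print(shape_of([[[1., 2., 3.]], [[7., 8., 9.]]]))  # [2, 1, 3] -> rank 3
```

This assumes a non-ragged nested list, which is exactly the condition for the literal to describe a valid tensor.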
The Computational Graph
You can think of a TensorFlow program as consisting of two parts:
- Building the computational graph.
- Running the computational graph.
A computational graph is a series of TensorFlow operations arranged into a graph of nodes; each node takes zero or more tensors as inputs and produces a tensor as output.
import tensorflow as tf
node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0)
print(node1, node2)
Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)
- Printing the nodes does not output the values 3.0 and 4.0 directly; it prints Tensor objects that will produce those values only when evaluated.
- To actually evaluate the nodes, run the graph within a session.
with tf.Session() as sess:
    print(sess.run([node1, node2]))
[3.0, 4.0]
from __future__ import print_function
node3 = tf.add(node1, node2)
print("node3:", node3)
with tf.Session() as sess:
    print("sess.run(node3):", sess.run(node3))
node3: Tensor("Add:0", shape=(), dtype=float32)
sess.run(node3): 7.0
A graph can be parameterized to accept external inputs, known as placeholders:
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)
with tf.Session() as sess:
    print(sess.run(adder_node, {a: 3, b: 4.5}))
    print(sess.run(adder_node, {a: [1, 2, 3], b: [2, 3, 4]}))
7.5
[ 3.  5.  7.]
mul_node = adder_node * 3.
with tf.Session() as sess:
    print(sess.run(mul_node, {a: 3, b: 4.5}))
22.5
To make the model trainable on arbitrary inputs, use Variables, which add trainable parameters to the graph:
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
Unlike constants, Variables are not initialized when created; they must be initialized explicitly:
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
print(sess.run(linear_model, {x : [1, 2, 3, 4]}))
[ 0. 0.30000001 0.60000002 0.90000004]
OK, we have now created a model; it is time to evaluate it. For that we need:
* ground-truth values (a y placeholder)
* a loss function
y = tf.placeholder(tf.float32)
diff = linear_model - y
squared_deltas = tf.square(diff)
loss = tf.reduce_sum(squared_deltas)  # sum of squared errors
print(sess.run(diff, {x: [1, 2, 3, 4], y: [2, 3, 4, 5]}))
print(sess.run(squared_deltas, {x: [1, 2, 3, 4], y: [2, 3, 4, 5]}))
print(sess.run(loss, {x: [1, 2, 3, 4], y: [2, 3, 4, 5]}))
[-2.         -2.70000005 -3.4000001  -4.0999999 ]
[  4.           7.29000044  11.56000042  16.80999947]
39.66
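As a sanity check, the same sum-of-squared-errors loss can be recomputed in plain Python (no TensorFlow) for the initial parameters W = 0.3, b = -0.3:

```python
# Recompute diff, squared_deltas, and loss by hand for the model
# linear_model = W*x + b with the same feeds as above.
W, b = 0.3, -0.3
xs, ys = [1, 2, 3, 4], [2, 3, 4, 5]

diffs = [W * x + b - y for x, y in zip(xs, ys)]
squared = [d * d for d in diffs]
loss = sum(squared)

print(diffs)  # [-2.0, -2.7, -3.4, -4.1] (up to float rounding)
print(loss)   # ~39.66, matching the TensorFlow output
```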
A variable's value can be changed with tf.assign:
fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
print(sess.run([fixW, fixb]))
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
print(sess.run(diff, {x: [1, 2, 3, 4], y: [2, 3, 4, 5]}))
print(sess.run(squared_deltas, {x: [1, 2, 3, 4], y: [2, 3, 4, 5]}))
print(sess.run(loss, {x: [1, 2, 3, 4], y: [2, 3, 4, 5]}))
[array([-1.], dtype=float32), array([ 1.], dtype=float32)]
[ 0. -1. -2. -3.]
[-2. -4. -6. -8.]
[  4.  16.  36.  64.]
120.0
tf.train
TensorFlow provides optimizers that simplify gradient computation and training.
TensorFlow can automatically produce derivatives given only a description of the model, using the function tf.gradients.
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
sess.run(init)
for i in range(1000):
    sess.run(train, {x: [1, 2, 3, 4], y: [2, 3, 4, 5]})
print(sess.run([W, b]))
[array([ 1.00000191], dtype=float32), array([ 0.9999944], dtype=float32)]
print(sess.run(loss, {x:[1,2,3,4], y:[2,3,4,5]}))
2.09326e-11
print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
[ 1.9999963 2.99999809 4. 5.00000191]
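To see what GradientDescentOptimizer is doing here, the same training loop can be sketched in plain Python. For loss = Σ(W·x + b − y)², the gradients are dL/dW = Σ 2·d·x and dL/db = Σ 2·d with d = W·x + b − y; repeatedly stepping against the gradient with learning rate 0.01 drives W and b toward 1, matching the TensorFlow result above. (This is a hand-written sketch of the update rule, not TensorFlow's actual implementation.)

```python
# Manual gradient descent on loss = sum((W*x + b - y)^2),
# starting from the same initial values W = 0.3, b = -0.3.
xs, ys = [1, 2, 3, 4], [2, 3, 4, 5]
W, b, lr = 0.3, -0.3, 0.01

for _ in range(1000):
    diffs = [W * x + b - y for x, y in zip(xs, ys)]
    grad_W = sum(2 * d * x for d, x in zip(diffs, xs))  # dL/dW
    grad_b = sum(2 * d for d in diffs)                  # dL/db
    W -= lr * grad_W
    b -= lr * grad_b

print(W, b)  # both converge very close to 1.0
```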