[Tensorflow] Tensor basics (dtype, shape, variable, slice and join)

Source: Internet · Editor: 程序博客网 · Date: 2024/06/05 08:39

1. Tensor attributes: tf.DType and tf.TensorShape

(1) Tensor type: tf.DType

```
# Selected types
tf.float16:  16-bit half-precision floating-point.
tf.float32:  32-bit single-precision floating-point.
tf.float64:  64-bit double-precision floating-point.
tf.int8:     8-bit signed integer.
tf.uint8:    8-bit unsigned integer.
tf.uint16:   16-bit unsigned integer.
tf.int16:    16-bit signed integer.
tf.int32:    32-bit signed integer.
tf.int64:    64-bit signed integer.
tf.bool:     Boolean.
tf.string:   String.
tf.resource: Handle to a mutable resource.
```

(2) Tensor shape

    Three kinds:
  • Fully-known shape: the number of dimensions is known and every dimension's length is known, e.g. [768, 100].
  • Partially-known shape: the number of dimensions is known, but some lengths are not, e.g. [None, 768, None], where None means any length.
  • Unknown shape: even the number of dimensions is unknown; the shape is None.
   A tensor's shape can be changed with tf.reshape(tensor, shape).
   Getting a tensor's shape:
```
# ts is a tensor
# two ways to get the shape
ts.get_shape()          # static shape, known at graph-construction time
sess.run(tf.shape(ts))  # dynamic shape, evaluated at run time
# get the rank
sess.run(tf.rank(ts))
```
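The static/dynamic split above is TensorFlow-specific, but the shape and rank semantics themselves can be checked without a session. A minimal NumPy sketch (NumPy stands in for the tensor here; `ndim` plays the role of `tf.rank`):

```python
import numpy as np

# A 2-D array standing in for a tensor of fully-known shape [2, 3].
ts = np.zeros((2, 3))

shape = ts.shape  # analogous to ts.get_shape() / tf.shape(ts)
rank = ts.ndim    # analogous to tf.rank(ts)

print(shape, rank)  # (2, 3) 2
```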


2. Defining tensors: random tensors, zero tensors, constants, variables, placeholders

(1) Random tensors and zero tensors

```
# normal distribution
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
# truncated normal distribution:
# values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.
tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
# uniform distribution
tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)
# others such as tf.random_shuffle; see the official docs
# zero tensor
tf.zeros(shape, dtype=tf.float32, name=None)
```
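The "drop and re-pick" rule of tf.truncated_normal can be sketched in NumPy with rejection sampling. The helper name `truncated_normal` is illustrative, not part of any library:

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=1.0, rng=None):
    """Sketch of tf.truncated_normal's semantics: samples more than
    2 standard deviations from the mean are dropped and re-drawn."""
    rng = np.random.default_rng() if rng is None else rng
    out = rng.normal(mean, stddev, size=shape)
    bad = np.abs(out - mean) > 2 * stddev
    while bad.any():                                   # re-pick rejected samples
        out[bad] = rng.normal(mean, stddev, size=int(bad.sum()))
        bad = np.abs(out - mean) > 2 * stddev
    return out

samples = truncated_normal((1000,))
print(samples.min(), samples.max())  # always within [-2, 2]
```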


(2) Constants: tf.constant

```
# definition
tf.constant(value, dtype=None, shape=None, name='Const', verify_shape=False)
# (0) value: a scalar or an array.
# (1) dtype: element type.
# (2) shape: e.g. [2, 3, 4]. If omitted, the shape of value is used. If given,
#     elements are taken from value one by one to fill the shape; if value runs
#     out, its last element is repeated.
# Example 1: [[0.1, 0.1, 0.1], [0.1, 0.1, 0.1]]
const = tf.constant(0.1, shape=[2, 3])
# Example 2: [[1, 2, 3, 4, 4, 4]]
tf.constant([[1, 2], [3, 4]], shape=[1, 6])
```
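The fill rule described above (take elements from value in order, repeat the last one when value runs out) can be reproduced in a few lines of NumPy. `constant_fill` is a hypothetical helper written only to make the rule concrete:

```python
import numpy as np

def constant_fill(value, shape):
    """Sketch of tf.constant's fill rule when an explicit shape is given:
    flatten value, then repeat its last element until the shape is full."""
    flat = np.ravel(np.asarray(value))
    n = int(np.prod(shape))
    if flat.size < n:  # not enough elements: pad with the last one
        flat = np.concatenate([flat, np.full(n - flat.size, flat[-1])])
    return flat[:n].reshape(shape)

print(constant_fill(0.1, [2, 3]))               # [[0.1 0.1 0.1] [0.1 0.1 0.1]]
print(constant_fill([[1, 2], [3, 4]], [1, 6]))  # [[1 2 3 4 4 4]]
```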

(3) Placeholders: tf.placeholder

```
# definition
tf.placeholder(dtype, shape=None, name=None)
# its value must be supplied via feed_dict at run time.
```

(4) Variables: tf.Variable

```
# When you create a Variable you pass a Tensor as its initial value
# to the Variable() constructor.
# definition
# tf.Variable(tensor, name)
# examples:
weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35), name="weights")
biases = tf.Variable(tf.zeros([200]), name="biases")
# initialize one variable directly from another
w2 = tf.Variable(weights * 2 - 0.1)
```
    Initializing Variable tensors:
```
sess.run(tf.global_variables_initializer())
# My understanding: when the graph runs forward or back propagation, every
# tensor in it must already have been given a value, and this assignment
# has to be declared explicitly.
```


    See also: for saving and restoring variables, see variable save & restore.

3. Slicing && joining: splitting and concatenating tensors

(1) tf.slice(): crop a sub-region out of a tensor

```
# tf.slice carves a sub-block out of an n-dimensional space,
# e.g. in 2-D it crops a sub-matrix.
tf.slice(input,     # a tensor, assumed n-dimensional
         begin,     # 1-D array of length n: the start coordinate in each dimension
         size,      # 1-D array of length n: the size in each dimension
         name=None)
# For example
# 'input' is [[[1, 1, 1], [2, 2, 2]],
#             [[3, 3, 3], [4, 4, 4]],
#             [[5, 5, 5], [6, 6, 6]]]
tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> [[[3, 3, 3],
                                            [4, 4, 4]]]
tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> [[[3, 3, 3]],
                                           [[5, 5, 5]]]
```
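The (begin, size) convention maps directly onto ordinary slicing: along each dimension i, take size[i] elements starting at begin[i]. A NumPy sketch of the same examples (`slice_like_tf` is an illustrative helper, not a library function):

```python
import numpy as np

x = np.array([[[1, 1, 1], [2, 2, 2]],
              [[3, 3, 3], [4, 4, 4]],
              [[5, 5, 5], [6, 6, 6]]])

def slice_like_tf(a, begin, size):
    """Sketch of tf.slice: take size[i] elements starting at begin[i]
    along each dimension i."""
    return a[tuple(slice(b, b + s) for b, s in zip(begin, size))]

print(slice_like_tf(x, [1, 0, 0], [1, 1, 3]))  # [[[3 3 3]]]
print(slice_like_tf(x, [1, 0, 0], [2, 1, 3]))  # [[[3 3 3]]  [[5 5 5]]]
```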

(2) tf.concat(): concatenate tensors along a given dimension (the snippet below is copied from the old official docs; in actual programming the concat_dim and values arguments must trade places, i.e. tf.concat(values, axis), otherwise you get an error)

```
# definition
tf.concat(concat_dim, values, name='concat')
# For example:
t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
# tensor t3 with shape [2, 3]
# tensor t4 with shape [2, 3]
tf.shape(tf.concat(0, [t3, t4])) ==> [4, 3]
tf.shape(tf.concat(1, [t3, t4])) ==> [2, 6]
```
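The same shape behavior can be verified with NumPy's `np.concatenate`, whose argument order (values first, then axis) matches the newer tf.concat(values, axis) API:

```python
import numpy as np

t1 = np.array([[1, 2, 3], [4, 5, 6]])
t2 = np.array([[7, 8, 9], [10, 11, 12]])

rows = np.concatenate([t1, t2], axis=0)  # along dim 0: shape (4, 3)
cols = np.concatenate([t1, t2], axis=1)  # along dim 1: shape (2, 6)
print(rows.shape, cols.shape)  # (4, 3) (2, 6)
```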


(3) Others

```
# (1) enlarge a tensor by replicating it
tf.tile(input, multiples, name=None)  # replicates input `multiples` times
# (2) padding
tf.pad(tensor, paddings, mode='CONSTANT', name=None)  # pad a tensor
# (3) join several rank-r tensors of the same shape into one rank-(r+1) tensor
tf.stack(values, axis=0, name='stack')  # values: a list of tensors
```
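All three have direct NumPy counterparts with the same semantics, which makes them easy to try out; a minimal sketch:

```python
import numpy as np

a = np.array([1, 2])

tiled = np.tile(a, 3)                        # like tf.tile: [1 2 1 2 1 2]
padded = np.pad(a, (1, 2), mode='constant')  # like tf.pad:  [0 1 2 0 0]
stacked = np.stack([a, a, a], axis=0)        # like tf.stack: three rank-1
                                             # tensors -> one rank-2, shape (3, 2)
print(tiled, padded, stacked.shape)
```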



