tf.transpose (API r1.3)
1. tf.transpose
transpose(a, perm=None, name='transpose')
Defined in tensorflow/python/ops/array_ops.py.
See the guides: Math > Matrix Math Functions, Tensor Transformations > Slicing and Joining
Transposes a. Permutes the dimensions according to perm.
The returned tensor's dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.
For example:
x = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.transpose(x)  # [[1, 4]
                 #  [2, 5]
                 #  [3, 6]]

# Equivalently
tf.transpose(x, perm=[1, 0])  # [[1, 4]
                              #  [2, 5]
                              #  [3, 6]]

# 'perm' is more useful for n-dimensional tensors, for n > 2
x = tf.constant([[[ 1,  2,  3],
                  [ 4,  5,  6]],
                 [[ 7,  8,  9],
                  [10, 11, 12]]])

# Take the transpose of the matrices in dimension-0
tf.transpose(x, perm=[0, 2, 1])  # [[[1,  4],
                                 #   [2,  5],
                                 #   [3,  6]],
                                 #  [[7, 10],
                                 #   [8, 11],
                                 #   [9, 12]]]
Args:
a: A Tensor.
perm: A permutation of the dimensions of a.
name: A name for the operation (optional).
Returns:
A transposed Tensor.
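One consequence of "output dimension i corresponds to input dimension perm[i]" is a simple shape rule: shape_out[i] == shape_in[perm[i]]. This can be checked without a TensorFlow session, since NumPy's np.transpose uses the same perm convention (an assumption worth leaning on for quick experiments, not part of the tf.transpose docs) — a minimal sketch:

```python
import numpy as np

# shape_out[i] == shape_in[perm[i]]: output dimension i is input dimension perm[i]
x = np.arange(24).reshape(2, 3, 4)   # shape (2, 3, 4)
perm = (2, 0, 1)
y = np.transpose(x, perm)
print(y.shape)  # (4, 2, 3)
```

Reading the shape off perm this way is often the fastest way to sanity-check a permutation before running it.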
2. example 1
import tensorflow as tf
import numpy as np

x1 = tf.constant([[1, 2, 3], [4, 5, 6]])
y11 = tf.transpose(x1)              # [[1, 4], [2, 5], [3, 6]]
# Equivalently
y12 = tf.transpose(x1, perm=[1, 0])

# 'perm' is more useful for n-dimensional tensors, for n > 2
x2 = tf.constant([[[ 1,  2,  3],
                   [ 4,  5,  6]],
                  [[ 7,  8,  9],
                   [10, 11, 12]]])
# Take the transpose of the matrices in dimension-0
y021 = tf.transpose(x2, perm=[0, 2, 1])
y012 = tf.transpose(x2, perm=[0, 1, 2])
y102 = tf.transpose(x2, perm=[1, 0, 2])
y120 = tf.transpose(x2, perm=[1, 2, 0])
y201 = tf.transpose(x2, perm=[2, 0, 1])
y210 = tf.transpose(x2, perm=[2, 1, 0])

with tf.Session() as sess:
    for name, t in [("outputy11", y11), ("outputy12", y12),
                    ("outputy021", y021), ("outputy012", y012),
                    ("outputy102", y102), ("outputy120", y120),
                    ("outputy201", y201), ("outputy210", y210)]:
        print("%s:" % name)
        print(sess.run(t))
        print('\n')
output:
outputy11:
[[1 4]
 [2 5]
 [3 6]]

outputy12:
[[1 4]
 [2 5]
 [3 6]]

outputy021:
[[[ 1  4]
  [ 2  5]
  [ 3  6]]
 [[ 7 10]
  [ 8 11]
  [ 9 12]]]

outputy012:
[[[ 1  2  3]
  [ 4  5  6]]
 [[ 7  8  9]
  [10 11 12]]]

outputy102:
[[[ 1  2  3]
  [ 7  8  9]]
 [[ 4  5  6]
  [10 11 12]]]

outputy120:
[[[ 1  7]
  [ 2  8]
  [ 3  9]]
 [[ 4 10]
  [ 5 11]
  [ 6 12]]]

outputy201:
[[[ 1  4]
  [ 7 10]]
 [[ 2  5]
  [ 8 11]]
 [[ 3  6]
  [ 9 12]]]

outputy210:
[[[ 1  7]
  [ 4 10]]
 [[ 2  8]
  [ 5 11]]
 [[ 3  9]
  [ 6 12]]]
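All eight results follow mechanically from the perm rule, and they can be reproduced without a session using NumPy's np.transpose, which shares the same perm convention — a quick sanity-check sketch:

```python
import numpy as np

x2 = np.array([[[ 1,  2,  3],
                [ 4,  5,  6]],
               [[ 7,  8,  9],
                [10, 11, 12]]])

# perm=[0, 2, 1] keeps dimension 0 and transposes each inner 2x3 matrix
y021 = np.transpose(x2, (0, 2, 1))
print(y021[0])     # [[1 4] [2 5] [3 6]], matching outputy021 above

# perm=[2, 1, 0] reverses all dimensions, which is what the default perm does
y210 = np.transpose(x2, (2, 1, 0))
print(y210.shape)  # (3, 2, 2)
```

The perm=[2, 1, 0] case illustrates the default behavior mentioned in the docs: with perm omitted, the dimensions are simply reversed.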
3. example 2
import tensorflow as tf
import numpy as np

x_anchor = tf.constant([[[0, 1], [2, 3], [4, 5],
                         [6, 7], [8, 9], [10, 11]]], dtype=np.float32)
y_anchor102 = tf.transpose(x_anchor, perm=[1, 0, 2])

with tf.Session() as sess:
    input_anchor = sess.run(x_anchor)
    print("input_anchor.shape:")
    print(input_anchor.shape)
    print('\n')
    output_anchor102 = sess.run(y_anchor102)
    print("output_anchor102:")
    print(output_anchor102)
    print('\n')
    print("output_anchor102.shape:")
    print(output_anchor102.shape)
output:
input_anchor.shape:
(1, 6, 2)

output_anchor102:
[[[  0.   1.]]
 [[  2.   3.]]
 [[  4.   5.]]
 [[  6.   7.]]
 [[  8.   9.]]
 [[ 10.  11.]]]

output_anchor102.shape:
(6, 1, 2)
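The shape change here, (1, 6, 2) to (6, 1, 2), is exactly the shape rule applied to the shape tuple: shape_out[i] == shape_in[perm[i]]. A quick NumPy check (assuming np.transpose's identical perm semantics; the values are arbitrary, only the shape matters):

```python
import numpy as np

x_anchor = np.arange(12, dtype=np.float32).reshape(1, 6, 2)  # shape (1, 6, 2)
y = np.transpose(x_anchor, (1, 0, 2))
print(y.shape)  # (6, 1, 2): shape_out[i] == shape_in[perm[i]]
```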
4. example 3
import tensorflow as tf
import numpy as np

# 'perm' is more useful for n-dimensional tensors, for n > 2
x = tf.constant([[[ 1,  2,  3],
                  [ 4,  5,  6]],
                 [[ 7,  8,  9],
                  [10, 11, 12]]])
# Swap the first two dimensions
y102 = tf.transpose(x, perm=[1, 0, 2])

with tf.Session() as sess:
    inputx = sess.run(x)
    print("inputx.shape:")
    print(inputx.shape)
    print('\n')
    print("inputx:")
    print(inputx)
    print('\n')
    outputy102 = sess.run(y102)
    print("outputy102.shape:")
    print(outputy102.shape)
    print('\n')
    print("outputy102:")
    print(outputy102)
    print('\n')
    print("inputx[1, 0, 0] - 7:")
    print(inputx[1, 0, 0])
    print("outputy102[0, 1, 0] - 7:")
    print(outputy102[0, 1, 0])
output:
inputx.shape:
(2, 2, 3)

inputx:
[[[ 1  2  3]
  [ 4  5  6]]
 [[ 7  8  9]
  [10 11 12]]]

outputy102.shape:
(2, 2, 3)

outputy102:
[[[ 1  2  3]
  [ 7  8  9]]
 [[ 4  5  6]
  [10 11 12]]]

inputx[1, 0, 0] - 7:
7
outputy102[0, 1, 0] - 7:
7
The element 7 has index [1, 0, 0] in x.
After tf.transpose(x, perm=[1, 0, 2]), element 7 has index [0, 1, 0] in outputy102.
perm=[1, 0, 2] can be understood as swapping the dimension-0 and dimension-1 coordinates of each element.
The element at index [0, 0, 0] of outputy102 (1) is element 1 at index [0, 0, 0] of x before the transpose.
The element at index [0, 0, 1] of outputy102 (2) is element 2 at index [0, 0, 1] of x before the transpose.
The element at index [0, 0, 2] of outputy102 (3) is element 3 at index [0, 0, 2] of x before the transpose.
The element at index [1, 0, 0] of outputy102 (4) is element 4 at index [0, 1, 0] of x before the transpose.
The element at index [1, 0, 1] of outputy102 (5) is element 5 at index [0, 1, 1] of x before the transpose.
The element at index [1, 0, 2] of outputy102 (6) is element 6 at index [0, 1, 2] of x before the transpose.
The element at index [0, 1, 0] of outputy102 (7) is element 7 at index [1, 0, 0] of x before the transpose.
The element at index [0, 1, 1] of outputy102 (8) is element 8 at index [1, 0, 1] of x before the transpose.
The element at index [0, 1, 2] of outputy102 (9) is element 9 at index [1, 0, 2] of x before the transpose.
The element at index [1, 1, 0] of outputy102 (10) is element 10 at index [1, 1, 0] of x before the transpose.
The element at index [1, 1, 1] of outputy102 (11) is element 11 at index [1, 1, 1] of x before the transpose.
The element at index [1, 1, 2] of outputy102 (12) is element 12 at index [1, 1, 2] of x before the transpose.
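The twelve mappings above are all instances of one rule: with perm=[1, 0, 2], y[i, j, k] == x[j, i, k] for every index. A short NumPy sketch verifying this exhaustively (np.transpose shares tf.transpose's perm convention):

```python
import numpy as np

x = np.array([[[ 1,  2,  3],
               [ 4,  5,  6]],
              [[ 7,  8,  9],
               [10, 11, 12]]])
y = np.transpose(x, (1, 0, 2))

# perm=[1, 0, 2] swaps the first two index coordinates of every element
for i in range(2):
    for j in range(2):
        for k in range(3):
            assert y[i, j, k] == x[j, i, k]
print(y[0, 1, 0])  # 7: index [1, 0, 0] in x becomes [0, 1, 0] after the swap
```

The general statement, for any perm, is y[idx] == x[tuple(idx[perm[d]] for each output dimension d)] — the output index is the input index with its coordinates rearranged by perm.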