TensorFlow Programming: Layers (contrib)
Higher level ops for building neural network layers
tf.contrib.layers.batch_norm
Adds a Batch Normalization layer.
tf.contrib.layers.batch_norm (inputs, decay=0.999, updates_collections=tf.GraphKeys.UPDATE_OPS, is_training=True, data_format=DATA_FORMAT_NHWC)
Can be used as the normalizer function for conv2d and fully_connected.
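A minimal usage sketch (the input shape, layer size, and optimizer below are illustrative, not taken from the original examples): batch_norm is plugged in as the normalizer_fn of a conv2d layer, and because is_training=True the moving-average updates land in tf.GraphKeys.UPDATE_OPS, so they have to be run together with the train op.

# coding=utf-8
import tensorflow as tf

# illustrative input: a batch of 32x32 RGB images
inputs = tf.placeholder(tf.float32, [None, 32, 32, 3])

# batch_norm used as the normalizer function of conv2d
net = tf.contrib.layers.conv2d(
    inputs, num_outputs=16, kernel_size=3,
    normalizer_fn=tf.contrib.layers.batch_norm,
    normalizer_params={'decay': 0.999, 'is_training': True})

loss = tf.reduce_mean(net)  # dummy loss, only to build a train op

# the moving mean/variance updates live in UPDATE_OPS and must run with the train step
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)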
tf.nn.conv2d_transpose
The transpose of conv2d.
tf.nn.conv2d_transpose (value, filter, output_shape, strides, padding='SAME', data_format='NHWC', name=None)
# -*- coding: utf-8 -*-
import tensorflow as tf

def func(in_put, in_channel, out_channel):
    weights = tf.get_variable(name="weights", shape=[2, 2, in_channel, out_channel],
                              initializer=tf.contrib.layers.xavier_initializer_conv2d())
    convolution = tf.nn.conv2d(input=in_put, filter=weights, strides=[1, 1, 1, 1], padding='VALID')
    conv_shape = convolution.get_shape().as_list()
    deconv_shape = [conv_shape[0], conv_shape[1]*2, conv_shape[2]*2, conv_shape[3]]
    deconvolution = tf.nn.conv2d_transpose(value=convolution, filter=weights,
                                           output_shape=deconv_shape, strides=[1, 2, 2, 1], padding='VALID')
    return in_put, convolution, deconvolution

def main():
    with tf.Graph().as_default():
        input_x = tf.placeholder(dtype=tf.float32, shape=[1, 4, 4, 1])
        in_put, convolution, deconvolution = func(input_x, 1, 1)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            import numpy as np
            _in_put, _convolution, _deconvolution = sess.run(
                [in_put, convolution, deconvolution],
                feed_dict={input_x: np.random.uniform(low=0, high=255, size=[1, 4, 4, 1])})
            print '\nin_put:'
            print in_put
            # print _in_put
            print '\nconvolution:'
            print convolution
            # print _convolution
            print '\ndeconvolution:'
            print deconvolution
            # print _deconvolution

if __name__ == "__main__":
    main()
2017-09-29 09:51:41.472842: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)

in_put:
Tensor("Placeholder:0", shape=(1, 4, 4, 1), dtype=float32)

convolution:
Tensor("Conv2D:0", shape=(1, 3, 3, 1), dtype=float32)

deconvolution:
Tensor("conv2d_transpose:0", shape=(1, 6, 6, 1), dtype=float32)

Process finished with exit code 0
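The shapes in this log follow the usual VALID-padding arithmetic: the forward convolution maps 4 -> (4 - 2)/1 + 1 = 3, and the transpose convolution maps it back with out = (in - 1) * stride + kernel = (3 - 1) * 2 + 2 = 6, which is exactly the (1, 6, 6, 1) output_shape requested above.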
tf.nn.dropout
tf.nn.dropout (x, keep_prob, noise_shape=None, seed=None, name=None)
# coding=utf-8
import tensorflow as tf

def main():
    with tf.Graph().as_default():
        import numpy as np
        input_x = np.random.uniform(0, 255, [3, 3])
        print input_x
        drop = [0, 0, 0]
        for i, keep_prob in enumerate([0.1, 0.5, 1.0]):
            drop[i] = tf.nn.dropout(x=input_x, keep_prob=keep_prob)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for drop_i in drop:
                _drop_i = sess.run(drop_i)
                print '\n----------\n'
                print _drop_i

if __name__ == "__main__":
    main()
# Original input
[[  16.46278229  253.27597997  246.33614039]
 [ 130.45261984  227.85971767  142.72621045]
 [ 173.23025953  165.99906514  180.13238617]]
2017-09-29 11:02:29.146976: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)

----------
# keep_prob = 0.1
[[    0.             0.             0.        ]
 [    0.             0.             0.        ]
 [    0.             0.          1801.3238617 ]]

----------
# keep_prob = 0.5
[[  32.92556457  506.55195994    0.        ]
 [ 260.90523969  455.71943533    0.        ]
 [ 346.46051906    0.          360.26477234]]

----------
# keep_prob = 1.0
[[  16.46278229  253.27597997  246.33614039]
 [ 130.45261984  227.85971767  142.72621045]
 [ 173.23025953  165.99906514  180.13238617]]
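The surviving entries are scaled by 1/keep_prob so that the expected sum of the activations is preserved: with keep_prob = 0.1 the single kept value is 180.13238617 * (1/0.1) = 1801.3238617, with keep_prob = 0.5 every kept value is doubled, and with keep_prob = 1.0 nothing is dropped and the input passes through unchanged.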
tf.contrib.layers.fully_connected
tf.contrib.layers.fully_connected (inputs, num_outputs, activation_fn=tf.nn.relu)
- By default the layer performs both the linear transform and the activation. The transform acts only on the last dimension of the input (a weighted sum over that dimension), and the rank of the tensor is unchanged, i.e. 'weights:0'.shape = [inputs.shape[-1], num_outputs].
- 'weights:0'.shape is always two-dimensional.
- num_outputs is the size of the second (i.e. last, index -1) dimension of 'weights:0'; after the layer is applied it also becomes the size of the last dimension (index -1) of the output tensor.
- If activation_fn=None is set, the output skips the activation and may therefore still contain negative values.
# coding=utf-8
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected

def main():
    with tf.Graph().as_default():
        import numpy as np
        input_x = np.random.uniform(0, 10, [3, 3])
        print input_x
        fn = fully_connected(inputs=input_x, num_outputs=1)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            _fn = sess.run(fn)
            print _fn
            print '\n----------\n'
            for (x, y) in zip(tf.global_variables(), sess.run(tf.global_variables())):
                print '\n', x, '\n', y

if __name__ == "__main__":
    main()
# Original input matrix
[[ 7.73305319  0.2780667   7.27101124]
 [ 0.84666041  0.92980727  6.83676724]
 [ 1.02844109  5.51824496  1.78840816]]
2017-09-29 11:33:02.500942: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
# Output tensor after fully_connected
[[ 2.32549239]
 [ 1.69284669]
 [ 0.        ]]

----------

<tf.Variable 'fully_connected/weights:0' shape=(3, 1) dtype=float64_ref>
[[-0.01048241]
 [-0.83954232]
 [ 0.36308597]]

<tf.Variable 'fully_connected/biases:0' shape=(1,) dtype=float64_ref>
[ 0.]

Process finished with exit code 0
# coding=utf-8
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected

def main():
    with tf.Graph().as_default():
        import numpy as np
        input_x = np.random.uniform(0, 10, [2, 4, 4, 3])
        print np.shape(input_x)
        fn = fully_connected(inputs=input_x, num_outputs=1)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            _fn = sess.run(fn)
            print np.shape(_fn)
            print '\n----------\n'
            for i in tf.global_variables():
                print '\n', i

if __name__ == "__main__":
    main()
(2, 4, 4, 3)
2017-09-29 11:46:17.248114: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1052] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
# Only the last dimension of the input is transformed and the rank is unchanged, i.e. 'weights:0'.shape = [inputs.shape[-1], num_outputs].
(2, 4, 4, 1)

----------

<tf.Variable 'fully_connected/weights:0' shape=(3, 1) dtype=float64_ref>
<tf.Variable 'fully_connected/biases:0' shape=(1,) dtype=float64_ref>
tf.nn.relu
max(features, 0)
tf.nn.relu(features, name=None)
tf.nn.relu6
min(max(features, 0), 6). A capped variant of tf.nn.relu: it prevents extreme values from remaining greater than 6 after the ReLU.
tf.nn.relu6(features, name=None)
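A small sketch contrasting the two ops (the input values are chosen for illustration):

# coding=utf-8
import tensorflow as tf

# values covering the negative, small-positive, and greater-than-6 cases
features = tf.constant([-3.0, 0.5, 4.0, 9.0])

relu = tf.nn.relu(features)    # [0.0, 0.5, 4.0, 9.0]
relu6 = tf.nn.relu6(features)  # [0.0, 0.5, 4.0, 6.0] -- values above 6 are clipped

with tf.Session() as sess:
    print(sess.run(relu))
    print(sess.run(relu6))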
tf.nn.softmax
Formula: softmax = exp(logits) / reduce_sum(exp(logits), dim)
tf.nn.softmax (logits, dim=-1, name=None)
# coding=utf-8
import tensorflow as tf
import numpy as np

input_x = tf.constant(np.random.uniform(0, 5, [2, 3]))
softmax = tf.nn.softmax(logits=input_x)

# hand-written softmax for comparison
def my_softmax(logits, dim=-1):
    my_softmax = tf.div(tf.exp(logits), tf.reduce_sum(tf.exp(logits), dim, keep_dims=True))
    return my_softmax

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    my_softmax = my_softmax(logits=input_x)
    _input_x, _softmax, _my_softmax = sess.run([input_x, softmax, my_softmax])
    print 'input:', np.shape(input_x), input_x, '\n', _input_x
    print '\n----------\n'
    print 'softmax:', np.shape(_softmax), softmax, '\n', _softmax
    print '\n----------\n'
    print 'my_softmax:', np.shape(_my_softmax), my_softmax, '\n', _my_softmax
    print '\n----------\n'
    for i in tf.global_variables():
        print '\n', i
# The softmax input is a tensor
input: (2, 3) Tensor("Const:0", shape=(2, 3), dtype=float64)
[[ 3.88517858  3.69402461  3.07837121]
 [ 1.27162028  2.12622856  4.34646188]]

----------
# After softmax the shape is unchanged; a tensor is returned
softmax: (2, 3) Tensor("Softmax:0", shape=(2, 3), dtype=float64)
[[ 0.44008545  0.36351295  0.1964016 ]
 [ 0.04000495  0.09402978  0.86596527]]

----------
# The formula softmax = exp(logits) / reduce_sum(exp(logits), dim) reproduces the built-in output
my_softmax: (2, 3) Tensor("div:0", shape=(2, 3), dtype=float64)
[[ 0.44008545  0.36351295  0.1964016 ]
 [ 0.04000495  0.09402978  0.86596527]]

----------
# No variables are stored in the graph
Process finished with exit code 0
Regularizers
tf.contrib.layers.l1_regularizer
tf.contrib.layers.l1_regularizer (scale, scope=None)
tf.contrib.layers.l2_regularizer
tf.contrib.layers.l2_regularizer (scale, scope=None)
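A minimal sketch of the common pattern (scale=0.1 and the layer size below are illustrative): the regularizer is passed as the weights_regularizer of a layer, and the resulting penalty terms are collected in tf.GraphKeys.REGULARIZATION_LOSSES, where they are normally summed and added to the training loss.

# coding=utf-8
import tensorflow as tf
import numpy as np

input_x = tf.constant(np.random.uniform(0, 10, [3, 3]), dtype=tf.float32)

# attach an L2 penalty to the layer's weights (scale=0.1 is illustrative)
fn = tf.contrib.layers.fully_connected(
    inputs=input_x, num_outputs=1,
    weights_regularizer=tf.contrib.layers.l2_regularizer(scale=0.1))

# the penalty terms are gathered in this collection
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
total_reg = tf.add_n(reg_losses)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(total_reg))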
Initializers
tf.contrib.layers.xavier_initializer
An initializer that performs "Xavier" initialization.
tf.contrib.layers.xavier_initializer (uniform=True, seed=None, dtype=tf.float32)
# coding=utf-8
import tensorflow as tf

xavier = tf.get_variable(name="weights", shape=[2, 2], initializer=tf.contrib.layers.xavier_initializer())
constant = tf.get_variable(name='biases', shape=[2], initializer=tf.constant_initializer())

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    _xavier, _constant = sess.run([xavier, constant])
    print '\n\nxavier:'
    print xavier
    print _xavier
    print '\n\nconstant:'
    print constant
    print _constant
xavier:
<tf.Variable 'weights:0' shape=(2, 2) dtype=float32_ref>
[[ 1.20015538  0.34742999]
 [ 0.39075291  0.60076308]]

constant:
<tf.Variable 'biases:0' shape=(2,) dtype=float32_ref>
[ 0.  0.]
import tensorflow as tf

print '\n\ntf.contrib.layers.xavier_initializer_conv2d() :\n', tf.contrib.layers.xavier_initializer_conv2d()
print '\n\ntf.constant_initializer() :\n', tf.constant_initializer()
print '\n\ntf.global_variables_initializer() :\n', tf.global_variables_initializer()
tf.contrib.layers.xavier_initializer_conv2d() :
<function _initializer at 0x7fe5133da578>

tf.constant_initializer() :
<tensorflow.python.ops.init_ops.Constant object at 0x7fe528bbdfd0>

tf.global_variables_initializer() :
name: "init"
op: "NoOp"
Optimization
Summaries
Feature columns