Convolution and pooling in TensorFlow, explained


First, the basic concepts of convolution and pooling need no further introduction; they have been written about extensively elsewhere. This post focuses on how they appear in TensorFlow.


Let's start with TensorFlow's interface definitions for the relevant functions:

tf.nn.conv2d(
    input,                  # input data, shape [batch, in_height, in_width, in_channels]
    filter,                 # kernel, shape [filter_height, filter_width, in_channels, out_channels];
                            # in_channels must match the input's channel count,
                            # out_channels is the number of output feature maps
    strides,                # strides[0] = strides[3] = 1; strides[1] and strides[2] are the
                            # step sizes in the height and width directions
    padding,                # "SAME" or "VALID"
    use_cudnn_on_gpu=None,
    data_format=None,
    name=None)

tf.nn.bias_add(
    value,
    bias,
    data_format=None,
    name=None)

tf.nn.avg_pool(
    value,                  # shape [batch, height, width, channels]
    ksize,                  # the size of the window for each dimension of the input
    strides,                # the stride of the sliding window for each dimension
    padding,                # "SAME" or "VALID"
    data_format='NHWC',     # 'NHWC' or 'NCHW'
    name=None)
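How the output height and width are determined by the padding mode is worth a brief detour. The sketch below is my own addition (not part of the original post) and mirrors the formulas in TensorFlow's documentation:

import math

# Output spatial size per dimension:
#   SAME  : out = ceil(in / stride)                  (zeros padded as needed)
#   VALID : out = ceil((in - filter + 1) / stride)   (no padding)
def conv_output_size(in_size, filter_size, stride, padding):
    if padding == "SAME":
        return math.ceil(in_size / stride)
    return math.ceil((in_size - filter_size + 1) / stride)

print(conv_output_size(5, 2, 2, "SAME"))   # 3
print(conv_output_size(5, 2, 2, "VALID"))  # 2

With the 5x5 input and 2x2 kernel used in the example below, stride 2 and SAME padding therefore give a 3x3 output.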


Example code:

import tensorflow as tf
import numpy as np

# 5x5 single-channel input matrix
Mat = np.array([[[1], [4], [-2], [7], [2]],
                [[3], [8], [1], [0], [3]],
                [[5], [-6], [-1], [4], [0]],
                [[7], [-2], [4], [0], [1]],
                [[2], [6], [1], [3], [-1]]])
print("The matrix is:", Mat)

# 2x2 convolution kernel with constant weights and a scalar bias of 2
filtKernel = tf.get_variable("weight", [2, 2, 1, 1],
                             initializer=tf.constant_initializer([[1, -2], [0, 4]]))
biases = tf.get_variable("biases", [1], initializer=tf.constant_initializer(2))

# Reshape the input to [batch, height, width, channels] = [1, 5, 5, 1]
Mat = np.asarray(Mat, dtype='float32')
Mat = Mat.reshape(1, 5, 5, 1)

x = tf.placeholder('float32', [1, None, None, 1])
conv = tf.nn.conv2d(x, filtKernel, strides=[1, 2, 2, 1], padding="SAME")
bias = tf.nn.bias_add(conv, biases)
pools = tf.nn.avg_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    conM = sess.run(conv, feed_dict={x: Mat})
    #print("convolution matrix: ", conM)
    conMbias = sess.run(bias, feed_dict={x: Mat})
    #print("convolution with bias: ", conMbias)
    pool = sess.run(pools, feed_dict={x: Mat})
    print("the pooling result is: ", pool)
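As a sanity check (my own addition, not part of the original post), all three sess.run results should have shape (1, 3, 3, 1): SAME padding with stride 2 maps the 5x5 input to ceil(5/2) = 3 in each spatial dimension, and bias_add leaves the shape unchanged. Appended at the end of the script above, the following assertions should pass:

# Shape check (assumes the conM, conMbias and pool arrays from the script above)
assert conM.shape == (1, 3, 3, 1)
assert conMbias.shape == (1, 3, 3, 1)
assert pool.shape == (1, 3, 3, 1)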

Screenshots of the run results:

[Screenshot: convolution output]

[Screenshot: pooling output]


In addition, here is a partial walkthrough of the results:

Convolution:
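Since the annotated screenshot is not reproduced here, the following is a hand check of the first output element (my own sketch, using the numbers from the script above). With padding="SAME" and stride 2, the first window is the unpadded top-left 2x2 patch of the input. Note that tf.nn.conv2d computes a cross-correlation: the patch is multiplied element-wise with the kernel and summed, without flipping the kernel.

import numpy as np

patch  = np.array([[1, 4],
                   [3, 8]], dtype=np.float32)   # top-left 2x2 patch of the input
kernel = np.array([[1, -2],
                   [0, 4]], dtype=np.float32)   # the constant kernel from the script

conv_00 = np.sum(patch * kernel)   # 1*1 + 4*(-2) + 3*0 + 8*4
print(conv_00)       # 25.0 -> first element of conM
print(conv_00 + 2)   # 27.0 -> first element of conMbias (bias = 2)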



Pooling result:
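Likewise for the average pooling (again my own sketch rather than the original screenshot): with a 2x2 window and stride 2, the first window is the same top-left patch, so the first element of the pooled output is simply its mean.

import numpy as np

patch = np.array([[1, 4],
                  [3, 8]], dtype=np.float32)
print(patch.mean())   # (1 + 4 + 3 + 8) / 4 = 4.0 -> first element of pool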

