How to Visualize Convolutional Layers with TensorFlow


In deep learning, visualizing convolutional layers helps you understand how they work and how training is progressing, but there is more than one way to do it. The simplest method is to render the convolution kernels and the post-convolution filter channels directly as images. There are also methods that use deconvolution (transposed convolution) to probe what a convolutional layer actually "sees".

Even for the simplest approach of directly outputting a convolutional layer, the TensorFlow explanations found online vary widely in quality. Today David 9 shows you a method that actually runs, so you won't be misled.

Without further ado, here is the simplest method:

Suppose you have a convolutional layer; we take the CIFAR-10 training example that ships with TensorFlow:

    with tf.variable_scope('conv1') as scope:
        kernel = _variable_with_weight_decay('weights',
                                             shape=[5, 5, 3, 64],
                                             stddev=5e-2,
                                             wd=0.0)
        conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
        biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
        pre_activation = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(pre_activation, name=scope.name)
        _activation_summary(conv1)

If nothing is out of the ordinary, you will have code like the above: it is the TensorFlow graph definition of the first convolutional layer, conv1. The conv1 object here is clearly the activated output of the layer. All we need to do is visualize that output directly. Add the following code inside this scope:

    with tf.variable_scope('visualization'):
        # scale weights to [0, 1]; dtype is still float
        x_min = tf.reduce_min(kernel)
        x_max = tf.reduce_max(kernel)
        kernel_0_to_1 = (kernel - x_min) / (x_max - x_min)
        # to the tf.summary.image format [batch_size, height, width, channels]
        kernel_transposed = tf.transpose(kernel_0_to_1, [3, 0, 1, 2])
        # this displays 3 of the 64 filters in conv1
        tf.summary.image('conv1/filters', kernel_transposed, max_outputs=3)
        # take the first image in the batch and its first 16 feature-map channels
        layer1_image1 = conv1[0:1, :, :, 0:16]
        layer1_image1 = tf.transpose(layer1_image1, perm=[3, 1, 2, 0])
        tf.summary.image("filtered_images_layer1", layer1_image1, max_outputs=16)

So the scope as a whole becomes:

    with tf.variable_scope('conv1') as scope:
        kernel = _variable_with_weight_decay('weights',
                                             shape=[5, 5, 3, 64],
                                             stddev=5e-2,
                                             wd=0.0)
        conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
        biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
        pre_activation = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(pre_activation, name=scope.name)
        _activation_summary(conv1)
        with tf.variable_scope('visualization'):
            # scale weights to [0, 1]; dtype is still float
            x_min = tf.reduce_min(kernel)
            x_max = tf.reduce_max(kernel)
            kernel_0_to_1 = (kernel - x_min) / (x_max - x_min)
            # to the tf.summary.image format [batch_size, height, width, channels]
            kernel_transposed = tf.transpose(kernel_0_to_1, [3, 0, 1, 2])
            # this displays 3 of the 64 filters in conv1
            tf.summary.image('conv1/filters', kernel_transposed, max_outputs=3)
            # take the first image in the batch and its first 16 feature-map channels
            layer1_image1 = conv1[0:1, :, :, 0:16]
            layer1_image1 = tf.transpose(layer1_image1, perm=[3, 1, 2, 0])
            tf.summary.image("filtered_images_layer1", layer1_image1, max_outputs=16)

What this adds: in TensorBoard you will see 3 of the 64 convolution kernels rendered as images, plus 16 of the post-convolution output filter channels.
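To see concretely why the kernel transpose makes the weights displayable, here is a minimal shape check on a dummy tensor (the random kernel below is a stand-in for the trained weights, purely for illustration):

    import tensorflow as tf

    # Stand-in for the conv1 kernel: [height, width, in_channels, out_channels]
    kernel = tf.random_normal([5, 5, 3, 64])
    # Moving out_channels to the front turns each of the 64 kernels into one
    # 5x5 "image" along the batch dimension that tf.summary.image expects
    kernel_transposed = tf.transpose(kernel, [3, 0, 1, 2])
    print(kernel_transposed.get_shape())  # (64, 5, 5, 3)

Because the input here has 3 channels, each 5×5×3 kernel renders naturally as a small RGB image.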

Worth explaining here is the tf.transpose() method, which permutes the dimensions of a tensor.

    tf.transpose(layer1_image1, perm=[3, 1, 2, 0])

This line swaps dimension 0 with dimension 3 (leaving dimensions 1 and 2 in place), because the image output function

    tf.summary.image()

expects its input in the format (batch size, height, width, color channels), whereas the convolution output we just sliced has the format (batch size, height, width, feature-map channels). After the transpose, the color-channel dimension has size 1, so each feature map is rendered in grayscale, and the batch dimension now holds what used to be the feature-map channels, so each of the 16 channels is emitted as its own image.
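The shape change is easy to verify on a dummy tensor. A minimal sketch (the 24×24 spatial size assumes the cropped inputs used by the CIFAR-10 tutorial):

    import tensorflow as tf

    # Dummy slice of conv1: (batch=1, height, width, first 16 channels)
    act = tf.zeros([1, 24, 24, 16])
    # Swap dims 0 and 3: each feature-map channel becomes its own
    # single-channel (grayscale) image in the batch dimension
    act_t = tf.transpose(act, perm=[3, 1, 2, 0])
    print(act_t.get_shape())  # (16, 24, 24, 1)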

In short, once you add the visualization scope above, it runs in real time. Tested and working. Sample output:

[Example TensorBoard output image]
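The snippets above only define the summary ops; to actually see them in TensorBoard you still need to evaluate and write them. Here is a minimal sketch, assuming the standard queue-based input pipeline of the CIFAR-10 tutorial (the session setup and the log directory are illustrative, not from the original post):

    import tensorflow as tf

    # Collect every summary op defined above (kernels, feature maps, ...)
    merged = tf.summary.merge_all()
    writer = tf.summary.FileWriter('/tmp/cifar10_vis', tf.get_default_graph())

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # The CIFAR-10 tutorial feeds `images` from input queues,
        # so the queue runners must be started first
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        summary = sess.run(merged)
        writer.add_summary(summary, global_step=0)
        coord.request_stop()
        coord.join(threads)
    writer.close()

Then point TensorBoard at the log directory: tensorboard --logdir /tmp/cifar10_vis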

References:

  1. http://stackoverflow.com/questions/35759220/how-to-visualize-learned-filters-on-tensorflow
  2. https://github.com/tensorflow/tensorflow/issues/842
  3. https://github.com/yosinski/deep-visualization-toolbox
  4. https://github.com/tensorflow/tensorflow/issues/908
  5. https://medium.com/@awjuliani/visualizing-neural-network-layer-activation-tensorflow-tutorial-d45f8bf7bbc4
  6. https://gist.github.com/kukuruza/03731dc494603ceab0c5

source: http://nooverfit.com/wp/%E7%94%A8tensorflow%E5%8F%AF%E8%A7%86%E5%8C%96%E5%8D%B7%E7%A7%AF%E5%B1%82%E7%9A%84%E6%96%B9%E6%B3%95/#comment-900
