Deep Learning Notes: Deep Learning Framework TensorFlow (12)


Reference: https://my.oschina.net/yilian/blog/661900

TensorBoard Dashboard Visualization

TensorBoard is the visualization dashboard that ships with TensorFlow. It can display the network structure, and it can also show how each layer's parameters change during training and testing.
1. Visualizing the TensorFlow graph: the ops we build at each step form a directed computation graph, which TensorBoard can render. TensorFlow attaches a default graph object to every session, accessible as sess.graph, so the graph can be written out with:

summary_writer = tf.summary.FileWriter('/tmp/tensorflowlogs', sess.graph)
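(tf.train.SummaryWriter, taking a graph_def argument, is the pre-1.0 name for this writer; tf.summary.FileWriter with sess.graph is the TensorFlow 1.x equivalent.) Once the log directory has been written, start TensorBoard against it from a shell and open the reported URL (http://localhost:6006 by default):

tensorboard --logdir=/tmp/tensorflowlogs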

If we also want to record parameters as the graph trains, we need to add the corresponding summary ops; the summary_writer above then writes those training-time records into the log so that TensorBoard can display them later along with the graph.
Because there are usually many variables, they can be organized into scopes and explored like an expandable node tree in the graph view; a quick sketch follows.
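As a minimal sketch of scoping (the tensor names and layer sizes here are illustrative, not taken from the original example), tf.name_scope groups related ops into a single collapsible node in TensorBoard's graph view:

x = tf.placeholder(tf.float32, [None, 784], name='input')
with tf.name_scope('hidden_layer'):
    # Everything defined inside this block collapses into one graph node
    W = tf.Variable(tf.random_normal([784, 100]), name='weights')
    b = tf.Variable(tf.zeros([100]), name='biases')
    y = tf.nn.relu(tf.matmul(x, W) + b, name='activation')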
The following are a few ways of recording individual summary ops, followed by the op that merges them all.
- Visualizing 4-D data:
The features a CNN produces at each step can be visualized as images, i.e. the feature ndarrays are rendered as pictures, so we can see what features the CNN has extracted at each layer (for example, the CNN features of a car image at each step). An image summary looks like this:

import numpy as np

shape = [4, 28, 28, 1]  # [batch, height, width, channels]; values here are illustrative
images = np.random.randint(256, size=shape).astype(np.uint8)
tf.summary.image("Visualize_image", images)

[Figure: the image summaries as rendered on TensorBoard's Images tab]
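The random uint8 batch above is only a stand-in. To watch real CNN feature maps, a common trick (a sketch, assuming conv1 is a convolution layer's output of shape [batch, height, width, channels], as in the full example at the end of this post) is to move the channel dimension into the batch dimension:

feature = tf.slice(conv1, [0, 0, 0, 0], [1, -1, -1, -1])  # keep only the first example
feature = tf.transpose(feature, [3, 1, 2, 0])             # -> [channels, height, width, 1]
tf.summary.image("conv1_features", feature, max_outputs=16)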
- Displaying the distributions of TensorFlow tensors as histogram charts:

w_hist = tf.summary.histogram("weights", W)
b_hist = tf.summary.histogram("biases", b)
y_hist = tf.summary.histogram("y", y)

[Figure: histogram summaries as rendered on TensorBoard's Histograms tab]
- Displaying TensorFlow scalars as line charts that evolve batch by batch over training:
For example, recording the accuracy curve:

accuracy_summary = tf.summary.scalar("accuracy", accuracy)

[Figure: the accuracy curve as rendered on TensorBoard's Scalars tab]
Because we end up with many summary ops of different kinds, we need one op to merge them. The following op merges all of the summaries defined above:

merged = tf.summary.merge_all()

Finally, run the merged summary op in the session and use the summary_writer created above to append the result to the graph log (i below is the current training step):

summary_all = sess.run(merged, feed_dict={x: batch_xs, y: batch_ys})
summary_writer.add_summary(summary_all, i)
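When training finishes, it is good practice to close the writer so that any buffered events are flushed to disk:

summary_writer.close()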
• A complete example:
# MNIST input helper: the standard TensorFlow 1.x tutorial module
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Parameters
learning_rate = 0.001
training_iters = 200000
batch_size = 64
display_step = 20

# Network Parameters
n_input = 784    # MNIST data input (img shape: 28*28)
n_classes = 10   # MNIST total classes (0-9 digits)
dropout = 0.8    # Dropout, probability to keep units

# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32)  # dropout (keep probability)

# Create custom model
def conv2d(name, l_input, w, b):
    return tf.nn.relu(tf.nn.bias_add(
        tf.nn.conv2d(l_input, w, strides=[1, 1, 1, 1], padding='SAME'), b), name=name)

def max_pool(name, l_input, k):
    return tf.nn.max_pool(l_input, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                          padding='SAME', name=name)

def norm(name, l_input, lsize=4):
    return tf.nn.lrn(l_input, lsize, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name=name)

def customnet(_X, _weights, _biases, _dropout):
    # Reshape input picture
    _X = tf.reshape(_X, shape=[-1, 28, 28, 1])
    # Convolution layer 1: conv, max pooling (down-sampling), normalization, dropout
    conv1 = conv2d('conv1', _X, _weights['wc1'], _biases['bc1'])
    pool1 = max_pool('pool1', conv1, k=2)
    norm1 = norm('norm1', pool1, lsize=4)
    norm1 = tf.nn.dropout(norm1, _dropout)
    # tf.summary.image("conv1", conv1)
    # Convolution layer 2
    conv2 = conv2d('conv2', norm1, _weights['wc2'], _biases['bc2'])
    pool2 = max_pool('pool2', conv2, k=2)
    norm2 = norm('norm2', pool2, lsize=4)
    norm2 = tf.nn.dropout(norm2, _dropout)
    # Convolution layer 3
    conv3 = conv2d('conv3', norm2, _weights['wc3'], _biases['bc3'])
    pool3 = max_pool('pool3', conv3, k=2)
    norm3 = norm('norm3', pool3, lsize=4)
    norm3 = tf.nn.dropout(norm3, _dropout)
    # Convolution layer 4
    conv4 = conv2d('conv4', norm3, _weights['wc4'], _biases['bc4'])
    pool4 = max_pool('pool4', conv4, k=2)
    norm4 = norm('norm4', pool4, lsize=4)
    norm4 = tf.nn.dropout(norm4, _dropout)
    # Fully connected layers
    dense1 = tf.reshape(norm4, [-1, _weights['wd1'].get_shape().as_list()[0]])
    dense1 = tf.nn.relu(tf.matmul(dense1, _weights['wd1']) + _biases['bd1'], name='fc1')
    dense2 = tf.nn.relu(tf.matmul(dense1, _weights['wd2']) + _biases['bd2'], name='fc2')
    # Output, class prediction
    out = tf.matmul(dense2, _weights['out']) + _biases['out']
    return out

# Store layers weight & bias
weights = {
    'wc1': tf.Variable(tf.random_normal([3, 3, 1, 64])),
    'wc2': tf.Variable(tf.random_normal([3, 3, 64, 128])),
    'wc3': tf.Variable(tf.random_normal([3, 3, 128, 256])),
    'wc4': tf.Variable(tf.random_normal([2, 2, 256, 512])),
    'wd1': tf.Variable(tf.random_normal([2 * 2 * 512, 1024])),
    'wd2': tf.Variable(tf.random_normal([1024, 1024])),
    'out': tf.Variable(tf.random_normal([1024, 10]))
}
biases = {
    'bc1': tf.Variable(tf.random_normal([64])),
    'bc2': tf.Variable(tf.random_normal([128])),
    'bc3': tf.Variable(tf.random_normal([256])),
    'bc4': tf.Variable(tf.random_normal([512])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'bd2': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Construct model
pred = customnet(x, weights, biases, keep_prob)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluate model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.global_variables_initializer()

# Summaries to track during training
tf.summary.scalar("loss", cost)
tf.summary.scalar("accuracy", accuracy)
merged_summary_op = tf.summary.merge_all()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    step = 1
    summary_writer = tf.summary.FileWriter('/tmp/logs', sess.graph)
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        # Fit training using batch data
        sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys, keep_prob: dropout})
        if step % display_step == 0:
            # Calculate batch accuracy and loss
            acc = sess.run(accuracy, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 1.})
            loss = sess.run(cost, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 1.})
            print("Iter " + str(step * batch_size) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " + "{:.5f}".format(acc))
            # Record the merged summaries for this step
            summary_str = sess.run(merged_summary_op,
                                   feed_dict={x: batch_xs, y: batch_ys, keep_prob: dropout})
            summary_writer.add_summary(summary_str, step)
        step += 1
    print("Optimization Finished!")
    # Calculate accuracy for 256 mnist test images
    print("Testing Accuracy:", sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                                             y: mnist.test.labels[:256],
                                                             keep_prob: 1.}))
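To inspect this run, start TensorBoard against the same log directory (tensorboard --logdir=/tmp/logs) and open http://localhost:6006: the loss and accuracy curves appear on the Scalars tab, and the network structure on the Graphs tab.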