Notes on cs231n-2017-assignments2-TensorFlow.ipynb


Beginner-level notes; experts, feel free to skip.

Today I finished the last part of cs231n-2017-assignments2-TensorFlow.ipynb reasonably well: building a model and training it on the CIFAR-10 dataset. Accuracy reached 0.98 on train and 0.82 on val today; I'll keep tuning tomorrow.

See TensorFlow.ipynb for the full code.

Only the model definition is posted below:

def my_model(X, y, is_training):
    # [conv-relu-conv-relu-pool], out = 14x14
    # (tf.layers.conv2d defaults to 'valid' padding, so each 3x3 conv
    # shrinks the spatial size by 2)
    conv1 = tf.layers.conv2d(X, 128, kernel_size=[3, 3], strides=(1, 1), activation=tf.nn.relu)
    ba1 = tf.layers.batch_normalization(conv1, training=is_training)
    conv2 = tf.layers.conv2d(ba1, 256, [3, 3], activation=tf.nn.relu)
    ba2 = tf.layers.batch_normalization(conv2, training=is_training)
    pool1 = tf.layers.max_pooling2d(ba2, pool_size=[2, 2], strides=2)
    # [conv-relu-conv-relu-pool], out = 5x5
    conv3 = tf.layers.conv2d(pool1, 512, [3, 3], activation=tf.nn.relu)
    ba3 = tf.layers.batch_normalization(conv3, training=is_training)
    conv4 = tf.layers.conv2d(ba3, 256, [3, 3], activation=tf.nn.relu)
    ba4 = tf.layers.batch_normalization(conv4, training=is_training)
    pool2 = tf.layers.max_pooling2d(ba4, pool_size=[2, 2], strides=2)
    # [dense-relu] x 2
    pool2_flat = tf.reshape(pool2, [-1, 5 * 5 * 256])
    dense1 = tf.layers.dense(pool2_flat, units=512, activation=tf.nn.relu)
    ba5 = tf.layers.batch_normalization(dense1, center=False, scale=False, training=is_training)
    dropout1 = tf.layers.dropout(ba5, training=is_training)
    dense2 = tf.layers.dense(dropout1, units=128, activation=tf.nn.relu)
    ba6 = tf.layers.batch_normalization(dense2, center=False, scale=False, training=is_training)
    dropout2 = tf.layers.dropout(ba6, training=is_training)
    # logits out
    logits = tf.layers.dense(dropout2, units=10)
    return logits
tf.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 32, 32, 3])
y = tf.placeholder(tf.int64, [None])
is_training = tf.placeholder(tf.bool)
y_out = my_model(X, y, is_training)
# tf.losses.softmax_cross_entropy already returns a scalar, so the
# reduce_mean below is effectively a no-op; get_regularization_loss()
# only contributes if the layers were built with a kernel_regularizer
total_loss = tf.losses.softmax_cross_entropy(tf.one_hot(y, 10), y_out) + tf.losses.get_regularization_loss()
mean_loss = tf.reduce_mean(total_loss)
optimizer = tf.train.RMSPropOptimizer(1e-3, decay=0.90, momentum=0.1)
# batch normalization in tensorflow requires this extra dependency
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    train_step = optimizer.minimize(mean_loss)
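For reference, here is a minimal training-loop sketch showing how is_training is fed and how train_step (which carries the batchnorm update dependency) is run. The assignment notebook provides a fuller run_model helper; the arrays X_train and y_train here are assumed to be the loaded CIFAR-10 training split, not something from the original post.

batch_size = 64
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # one pass over the (assumed) training arrays X_train, y_train
    for i in range(0, X_train.shape[0], batch_size):
        feed = {X: X_train[i:i + batch_size],
                y: y_train[i:i + batch_size],
                is_training: True}  # True enables dropout and batchnorm stat updates
        loss, _ = sess.run([mean_loss, train_step], feed_dict=feed)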


Notes and takeaways:

1. A very small model overfits easily, even during training: after a few epochs, train_acc starts to drop.

2. For a large model, to address overfitting I added dropout to the dense layers and a regularization term to the loss (see the caveat and sketch below).
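One caveat: tf.losses.get_regularization_loss() only collects losses from layers built with an explicit regularizer; otherwise it returns zero. A minimal sketch of attaching one, assuming TF 1.x's tf.contrib is available and an L2 strength of 1e-4 (both are my assumptions, not from the original code):

reg = tf.contrib.layers.l2_regularizer(scale=1e-4)  # strength is an assumption
conv1 = tf.layers.conv2d(X, 128, [3, 3], activation=tf.nn.relu,
                         kernel_regularizer=reg)  # now contributes to get_regularization_loss()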

3. Without batchnorm, val_acc could barely reach 0.7; after adding batchnorm it went straight to 0.8. Note that a conv layer should be followed by spatial batchnorm, while a dense layer is followed by vanilla batchnorm (illustrated below).
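To illustrate the distinction: spatial batchnorm computes one mean/variance per channel over the batch and both spatial dimensions, while vanilla batchnorm computes one per feature over the batch only. A small sketch with tf.nn.moments, with shapes chosen to match the model above:

# Spatial batchnorm (after conv): statistics over batch + spatial dims,
# one mean/var per channel.
conv_out = tf.placeholder(tf.float32, [None, 14, 14, 256])
mean_sp, var_sp = tf.nn.moments(conv_out, axes=[0, 1, 2])  # shape (256,)

# Vanilla batchnorm (after dense): statistics over the batch only,
# one mean/var per feature.
dense_out = tf.placeholder(tf.float32, [None, 512])
mean_v, var_v = tf.nn.moments(dense_out, axes=[0])  # shape (512,)

In practice tf.layers.batch_normalization handles this automatically: with the default axis=-1 it normalizes over all other axes, so a 4-D conv output gets spatial batchnorm and a 2-D dense output gets vanilla batchnorm.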

4. I often need to compute conv output sizes, which is tedious by hand, so I wrote a small Python helper; it's quite convenient (a sketch follows).
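The author's script isn't included in the post; a minimal sketch of such a helper, using the standard formula out = (in - kernel + 2*pad) // stride + 1, traced against the model above:

def conv_output_size(in_size, kernel, stride=1, pad=0):
    """Spatial output size of a conv (or pooling) layer."""
    return (in_size - kernel + 2 * pad) // stride + 1

# tf.layers.conv2d defaults to 'valid' padding, i.e. pad=0:
size = 32
size = conv_output_size(size, 3)            # conv1 -> 30
size = conv_output_size(size, 3)            # conv2 -> 28
size = conv_output_size(size, 2, stride=2)  # pool1 -> 14
size = conv_output_size(size, 3)            # conv3 -> 12
size = conv_output_size(size, 3)            # conv4 -> 10
size = conv_output_size(size, 2, stride=2)  # pool2 -> 5
print(size)  # 5, matching the 5*5*256 reshape in my_model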

5. When you run into problems, check the official TensorFlow documentation.
