Implementing DropConnect with TensorLayer (TuneLayer)
Source: Internet · 程序博客网 · 2024/05/01 11:26
DropConnect is a regularization method that appeared after Hinton's Dropout. Compared with Dropout, its advantage is that it maintains accuracy even when the model is small. Since Python examples are hard to find online, here is an implementation using TensorLayer, for reference.
Paper: Regularization of Neural Networks using DropConnect (Wan et al., ICML 2013)
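Before the full TensorLayer script, here is a minimal NumPy sketch of the idea (not TensorLayer's actual implementation): Dropout draws one Bernoulli mask per output activation, while DropConnect draws one mask per individual weight. Inverted scaling by `1/keep` is assumed to keep the expected output unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))   # a batch of 4 input vectors
W = rng.normal(size=(8, 3))   # dense-layer weight matrix
keep = 0.5

# Dropout: mask whole output units (one Bernoulli draw per activation).
unit_mask = rng.random((4, 3)) < keep
dropout_out = (x @ W) * unit_mask / keep

# DropConnect: mask individual weights (one Bernoulli draw per connection).
weight_mask = rng.random((8, 3)) < keep
dropconnect_out = x @ (W * weight_mask) / keep

print(dropout_out.shape, dropconnect_out.shape)  # (4, 3) (4, 3)
```

Note that the original paper approximates the test-time output with Gaussian moment matching; in practice (and in the script below) the stochastic masks are simply disabled at evaluation time.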
import tensorflow as tf
import tensorlayer as tl  # Note: TensorLayer has been renamed TuneLayer, see https://github.com/zsdonghao/tunelayer
from tensorlayer.layers import set_keep
import numpy as np
import time
X_train, y_train, X_val, y_val, X_test, y_test = \
tl.files.load_mnist_dataset(shape=(-1,784))
sess = tf.InteractiveSession()
# placeholder
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')
network = tl.layers.InputLayer(x, name='input_layer')
network = tl.layers.DropconnectDenseLayer(network, keep=0.8, n_units=800,
                                          act=tf.nn.relu, name='dropconnect_relu1')
network = tl.layers.DropconnectDenseLayer(network, keep=0.5, n_units=800,
                                          act=tf.nn.relu, name='dropconnect_relu2')
network = tl.layers.DropconnectDenseLayer(network, keep=0.5, n_units=10,
                                          act=tl.activation.identity, name='output_layer')
y = network.outputs
y_op = tf.argmax(tf.nn.softmax(y), 1)
cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=y_))
params = network.all_params
# train
n_epoch = 500
batch_size = 128
learning_rate = 0.0001
print_freq = 10
train_op = tf.train.AdamOptimizer(learning_rate, beta1=0.9, beta2=0.999,
epsilon=1e-08, use_locking=False).minimize(cost)
sess.run(tf.initialize_all_variables()) # initialize all variables
network.print_params()
network.print_layers()
print(' learning_rate: %f' % learning_rate)
print(' batch_size: %d' % batch_size)
for epoch in range(n_epoch):
    start_time = time.time()
    for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train,
                                                       batch_size, shuffle=True):
        feed_dict = {x: X_train_a, y_: y_train_a}
        feed_dict.update(network.all_drop)  # enable all dropout/dropconnect/denoising layers
        sess.run(train_op, feed_dict=feed_dict)
    if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
        print("Epoch %d of %d took %fs" % (epoch + 1, n_epoch, time.time() - start_time))
        dp_dict = tl.utils.dict_to_one(network.all_drop)  # disable all dropout/dropconnect/denoising layers
        feed_dict = {x: X_train, y_: y_train}
        feed_dict.update(dp_dict)
        print("  train loss: %f" % sess.run(cost, feed_dict=feed_dict))
        dp_dict = tl.utils.dict_to_one(network.all_drop)
        feed_dict = {x: X_val, y_: y_val}
        feed_dict.update(dp_dict)
        print("  val loss: %f" % sess.run(cost, feed_dict=feed_dict))
        print("  val acc: %f" % np.mean(y_val == sess.run(y_op, feed_dict=feed_dict)))
        try:
            # Visualize the weights of the 1st hidden layer.
            tl.visualize.W(network.all_params[0].eval(), second=10,
                           saveable=True, shape=[28, 28],
                           name='w1_' + str(epoch + 1), fig_idx=2012)
            # You can also save the weights of the 1st hidden layer to a .npz file:
            # tl.files.save_npz([network.all_params[0]], name='w1_' + str(epoch + 1) + '.npz')
        except Exception:
            raise Exception("You should change visualize.W() if you want "
                            "to save the feature images for a different dataset")
print('Evaluation')
dp_dict = tl.utils.dict_to_one( network.all_drop )
feed_dict = {x: X_test, y_: y_test}
feed_dict.update(dp_dict)
print(" test loss: %f" % sess.run(cost, feed_dict=feed_dict))
print(" test acc: %f" % np.mean(y_test == sess.run(y_op,
feed_dict=feed_dict)))
tl.files.save_npz(network.all_params, name='model.npz')
# Save only the first layer's weights under a different file name;
# reusing name='model.npz' here would overwrite the full model saved above.
tl.files.save_npz([network.all_params[0]], name='model_w1.npz')
# Then, restore the parameters as follow.
# load_params = tl.files.load_npz(path='', name='model.npz')
# In the end, close TensorFlow session.
sess.close()
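The `network.all_drop` / `tl.utils.dict_to_one` pattern used above can be sketched framework-free. The dictionary keys below are illustrative only; in TensorLayer, `all_drop` actually maps each layer's keep-probability placeholder tensor to its training-time value, and setting every value to 1.0 disables the stochastic masks at evaluation time.

```python
# Illustrative stand-in for TensorLayer's all_drop dictionary:
# one keep probability per DropConnect layer (names hypothetical).
all_drop = {'dropconnect_relu1/keep': 0.8,
            'dropconnect_relu2/keep': 0.5,
            'output_layer/keep': 0.5}

def dict_to_one(dp_dict):
    """Return a copy of dp_dict with every keep probability set to 1.0."""
    return {key: 1.0 for key in dp_dict}

train_feed = dict(all_drop)        # training: masks are active
eval_feed = dict_to_one(all_drop)  # evaluation: keep every weight

print(sorted(set(eval_feed.values())))  # [1.0]
```

Feeding `train_feed` during `sess.run(train_op, ...)` and `eval_feed` during loss/accuracy evaluation is exactly what the `feed_dict.update(...)` calls in the script accomplish.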