#cs231n# Assignment 2: BatchNormalization.ipynb


My implementation and notes for BatchNormalization.ipynb, based on my own understanding and the reference materials listed at the end.

Batch Normalization

One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization, which was recently proposed by [3].

The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance, since they are outputs of earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.

The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.

It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.
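Concretely, for a minibatch of values $x_1, \dots, x_N$ of a single feature, the transform from [3] is

$$\mu_B = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \sigma_B^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \mu_B)^2,$$

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma\,\hat{x}_i + \beta,$$

where $\epsilon$ is a small constant for numerical stability and $\gamma$, $\beta$ are the learnable scale and shift parameters.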

[3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", ICML 2015.

```python
# As usual, a bit of setup
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

def rel_error(x, y):
  """ returns relative error """
  return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
```
```python
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
  print '%s: ' % k, v.shape
```
```
X_val:  (1000, 3, 32, 32)
X_train:  (49000, 3, 32, 32)
X_test:  (1000, 3, 32, 32)
y_val:  (1000,)
y_train:  (49000,)
y_test:  (1000,)
```

Batch normalization: Forward

In the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.
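For reference, below is one way the forward pass can be written. This is my own sketch, not the official solution; it assumes the bn_param conventions documented in layers.py (keys 'mode', 'eps', 'momentum', 'running_mean', 'running_var'), and its cache layout is reused by the backward sketches later in this post.

```python
def batchnorm_forward(x, gamma, beta, bn_param):
  mode = bn_param['mode']
  eps = bn_param.get('eps', 1e-5)
  momentum = bn_param.get('momentum', 0.9)

  N, D = x.shape
  running_mean = bn_param.get('running_mean', np.zeros(D, dtype=x.dtype))
  running_var = bn_param.get('running_var', np.zeros(D, dtype=x.dtype))

  if mode == 'train':
    # Per-feature statistics of the current minibatch
    sample_mean = x.mean(axis=0)
    sample_var = x.var(axis=0)

    # Normalize, then scale and shift
    x_hat = (x - sample_mean) / np.sqrt(sample_var + eps)
    out = gamma * x_hat + beta

    # Exponential moving averages, used at test time
    running_mean = momentum * running_mean + (1 - momentum) * sample_mean
    running_var = momentum * running_var + (1 - momentum) * sample_var

    cache = (x, x_hat, sample_mean, sample_var, gamma, eps)
  elif mode == 'test':
    # Use the running averages accumulated during training
    x_hat = (x - running_mean) / np.sqrt(running_var + eps)
    out = gamma * x_hat + beta
    cache = None
  else:
    raise ValueError('Invalid forward batchnorm mode "%s"' % mode)

  # Store the updated running statistics back into bn_param
  bn_param['running_mean'] = running_mean
  bn_param['running_var'] = running_var

  return out, cache
```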

```python
# Check the training-time forward pass by checking means and variances
# of features both before and after batch normalization

# Simulate the forward pass for a two-layer network
N, D1, D2, D3 = 200, 50, 60, 3
X = np.random.randn(N, D1)
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)
a = np.maximum(0, X.dot(W1)).dot(W2)

print 'Before batch normalization:'
print '  means: ', a.mean(axis=0)
print '  stds: ', a.std(axis=0)

# Means should be close to zero and stds close to one
print 'After batch normalization (gamma=1, beta=0)'
a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})
print '  mean: ', a_norm.mean(axis=0)
print '  std: ', a_norm.std(axis=0)

# Now means should be close to beta and stds close to gamma
gamma = np.asarray([1.0, 2.0, 3.0])
beta = np.asarray([11.0, 12.0, 13.0])
a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})
print 'After batch normalization (nontrivial gamma, beta)'
print '  means: ', a_norm.mean(axis=0)
print '  stds: ', a_norm.std(axis=0)
```
```
Before batch normalization:
  means:  [-2.87344914 -2.03847197  0.87613968]
  stds:  [ 29.35220431  32.30320024  32.73609642]
After batch normalization (gamma=1, beta=0)
  mean:  [  1.83186799e-17  -5.10702591e-17  -1.66533454e-17]
  std:  [ 0.99999966  0.99999969  0.99999969]
After batch normalization (nontrivial gamma, beta)
  means:  [ 11.  12.  13.]
  stds:  [ 0.99999966  1.99999938  2.99999908]
```
```python
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, D1, D2, D3 = 200, 50, 60, 3
W1 = np.random.randn(D1, D2)
W2 = np.random.randn(D2, D3)

bn_param = {'mode': 'train'}
gamma = np.ones(D3)
beta = np.zeros(D3)
for t in xrange(50):
  X = np.random.randn(N, D1)
  a = np.maximum(0, X.dot(W1)).dot(W2)
  batchnorm_forward(a, gamma, beta, bn_param)

bn_param['mode'] = 'test'
X = np.random.randn(N, D1)
a = np.maximum(0, X.dot(W1)).dot(W2)
a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)

# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After batch normalization (test-time):'
print '  means: ', a_norm.mean(axis=0)
print '  stds: ', a_norm.std(axis=0)
```
```
After batch normalization (test-time):
  means:  [ 0.13311871 -0.03358003 -0.04147392]
  stds:  [ 1.07270062  0.93973232  1.02181818]
```

Batch Normalization: backward

Now implement the backward pass for batch normalization in the function batchnorm_backward.

To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.

Once you have finished, run the following to numerically check your backward pass.
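For reference, a staged backward pass along these lines might look as follows. Again, this is my own sketch; it unpacks the cache tuple produced by the forward sketch above.

```python
def batchnorm_backward(dout, cache):
  x, x_hat, mean, var, gamma, eps = cache
  N, D = dout.shape

  # Gradients of the scale-and-shift step: out = gamma * x_hat + beta
  dbeta = dout.sum(axis=0)
  dgamma = (dout * x_hat).sum(axis=0)
  dx_hat = dout * gamma

  # Backprop through x_hat = (x - mean) / sqrt(var + eps),
  # summing gradients over the branches flowing into mean and var
  std_inv = 1.0 / np.sqrt(var + eps)
  dvar = np.sum(dx_hat * (x - mean) * -0.5 * std_inv**3, axis=0)
  dmean = np.sum(dx_hat * -std_inv, axis=0) + dvar * np.mean(-2.0 * (x - mean), axis=0)

  # Each x_i receives gradient through three paths: directly, via mean, via var
  dx = dx_hat * std_inv + dvar * 2.0 * (x - mean) / N + dmean / N

  return dx, dgamma, dbeta
```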

```python
# Gradient check batchnorm backward pass
N, D = 4, 5
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)

bn_param = {'mode': 'train'}
fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]
fb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]

dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)

_, cache = batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
```
```
dx error:  9.38819761007e-10
dgamma error:  3.26616906097e-11
dbeta error:  3.31224529856e-12
```

Batch Normalization: alternative backward

In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.

Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.

NOTE: You can still complete the rest of the assignment if you don’t figure this part out, so don’t worry too much if you can’t get it.
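For reference, writing $\hat{x}_i$ for the normalized values and $\frac{\partial L}{\partial y_i}$ for the upstream gradient dout, collapsing the staged backward pass on paper gives, per feature column,

$$\frac{\partial L}{\partial x_i} = \frac{\gamma}{N\sqrt{\sigma_B^2 + \epsilon}}\left(N\,\frac{\partial L}{\partial y_i} - \sum_{j=1}^{N}\frac{\partial L}{\partial y_j} - \hat{x}_i\sum_{j=1}^{N}\frac{\partial L}{\partial y_j}\,\hat{x}_j\right),$$

which vectorizes to just a few lines. This is again my own sketch, reusing the cache layout from the forward sketch above:

```python
def batchnorm_backward_alt(dout, cache):
  x, x_hat, mean, var, gamma, eps = cache
  N = dout.shape[0]

  dbeta = dout.sum(axis=0)
  dgamma = (dout * x_hat).sum(axis=0)

  # Closed-form expression for dx obtained by simplifying on paper
  dx_hat = dout * gamma
  dx = (N * dx_hat - dx_hat.sum(axis=0) - x_hat * (dx_hat * x_hat).sum(axis=0)) \
       / (N * np.sqrt(var + eps))

  return dx, dgamma, dbeta
```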

```python
N, D = 100, 500
x = 5 * np.random.randn(N, D) + 12
gamma = np.random.randn(D)
beta = np.random.randn(D)
dout = np.random.randn(N, D)

bn_param = {'mode': 'train'}
out, cache = batchnorm_forward(x, gamma, beta, bn_param)

t1 = time.time()
dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)
t2 = time.time()
dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)
t3 = time.time()

print 'dx difference: ', rel_error(dx1, dx2)
print 'dgamma difference: ', rel_error(dgamma1, dgamma2)
print 'dbeta difference: ', rel_error(dbeta1, dbeta2)
print 'speedup: %.2fx' % ((t2 - t1) / (t3 - t2))
```
```
dx difference:  1.08773935485e-12
dgamma difference:  0.0
dbeta difference:  0.0
speedup: 1.61x
```

Fully Connected Nets with Batch Normalization

Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs231n/classifiers/fc_net.py. Modify your implementation to add batch normalization.

Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.

HINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.
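For example, a sandwich layer along the lines of affine_relu_forward in layer_utils.py might look like this. The names affine_bn_relu_forward/backward are hypothetical choices of my own; affine_forward, relu_forward, and their backward counterparts are the existing layers from layers.py:

```python
def affine_bn_relu_forward(x, w, b, gamma, beta, bn_param):
  """Convenience layer: affine transform -> batch norm -> ReLU."""
  a, fc_cache = affine_forward(x, w, b)
  a_bn, bn_cache = batchnorm_forward(a, gamma, beta, bn_param)
  out, relu_cache = relu_forward(a_bn)
  cache = (fc_cache, bn_cache, relu_cache)
  return out, cache

def affine_bn_relu_backward(dout, cache):
  """Backward pass for the affine-batchnorm-relu convenience layer."""
  fc_cache, bn_cache, relu_cache = cache
  da_bn = relu_backward(dout, relu_cache)
  da, dgamma, dbeta = batchnorm_backward(da_bn, bn_cache)
  dx, dw, db = affine_backward(da, fc_cache)
  return dx, dw, db, dgamma, dbeta
```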

```python
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))

for reg in [0, 3.14]:
  print 'Running check with reg = ', reg
  model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
                            reg=reg, weight_scale=5e-2, dtype=np.float64,
                            use_batchnorm=True)

  loss, grads = model.loss(X, y)
  print 'Initial loss: ', loss

  for name in sorted(grads):
    f = lambda _: model.loss(X, y)[0]
    grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
    print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))
  if reg == 0: print
```
```
Running check with reg =  0
Initial loss:  2.29847232828
W1 relative error: 7.25e-05
W2 relative error: 6.15e-06
W3 relative error: 3.59e-10
b1 relative error: 5.55e-09
b2 relative error: 2.22e-03
b3 relative error: 1.49e-10
beta1 relative error: 3.44e-07
beta2 relative error: 2.00e-09
gamma1 relative error: 2.10e-08
gamma2 relative error: 1.75e-09

Running check with reg =  3.14
Initial loss:  6.79729678378
W1 relative error: 2.69e-04
W2 relative error: 6.09e-05
W3 relative error: 2.67e-08
b1 relative error: 4.44e-08
b2 relative error: 4.44e-08
b3 relative error: 8.34e-11
beta1 relative error: 7.45e-09
beta2 relative error: 1.61e-08
gamma1 relative error: 7.98e-09
gamma2 relative error: 1.06e-08
```

Batchnorm for deep networks

Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.

```python
# Try training a very deep net with batchnorm
hidden_dims = [100, 100, 100, 100, 100]

num_train = 1000
small_data = {
  'X_train': data['X_train'][:num_train],
  'y_train': data['y_train'][:num_train],
  'X_val': data['X_val'],
  'y_val': data['y_val'],
}

weight_scale = 2e-2
bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)

bn_solver = Solver(bn_model, small_data,
                num_epochs=10, batch_size=50,
                update_rule='adam',
                optim_config={
                  'learning_rate': 1e-3,
                },
                verbose=True, print_every=200)
bn_solver.train()

solver = Solver(model, small_data,
                num_epochs=10, batch_size=50,
                update_rule='adam',
                optim_config={
                  'learning_rate': 1e-3,
                },
                verbose=True, print_every=200)
solver.train()
```
```
(Iteration 1 / 200) loss: 2.297931
(Epoch 0 / 10) train acc: 0.146000; val_acc: 0.136000
(Epoch 1 / 10) train acc: 0.324000; val_acc: 0.286000
(Epoch 2 / 10) train acc: 0.406000; val_acc: 0.309000
(Epoch 3 / 10) train acc: 0.468000; val_acc: 0.311000
(Epoch 4 / 10) train acc: 0.533000; val_acc: 0.343000
(Epoch 5 / 10) train acc: 0.616000; val_acc: 0.355000
(Epoch 6 / 10) train acc: 0.667000; val_acc: 0.333000
(Epoch 7 / 10) train acc: 0.683000; val_acc: 0.353000
(Epoch 8 / 10) train acc: 0.689000; val_acc: 0.339000
(Epoch 9 / 10) train acc: 0.735000; val_acc: 0.312000
(Epoch 10 / 10) train acc: 0.784000; val_acc: 0.320000
(Iteration 1 / 200) loss: 2.302619
(Epoch 0 / 10) train acc: 0.124000; val_acc: 0.131000
(Epoch 1 / 10) train acc: 0.194000; val_acc: 0.156000
(Epoch 2 / 10) train acc: 0.294000; val_acc: 0.245000
(Epoch 3 / 10) train acc: 0.329000; val_acc: 0.245000
(Epoch 4 / 10) train acc: 0.388000; val_acc: 0.285000
(Epoch 5 / 10) train acc: 0.408000; val_acc: 0.311000
(Epoch 6 / 10) train acc: 0.433000; val_acc: 0.302000
(Epoch 7 / 10) train acc: 0.533000; val_acc: 0.300000
(Epoch 8 / 10) train acc: 0.571000; val_acc: 0.305000
(Epoch 9 / 10) train acc: 0.635000; val_acc: 0.290000
(Epoch 10 / 10) train acc: 0.640000; val_acc: 0.326000
```

Run the following to visualize the results from the two networks trained above. You should find that using batch normalization helps the network converge much faster.

```python
plt.subplot(3, 1, 1)
plt.title('Training loss')
plt.xlabel('Iteration')
plt.subplot(3, 1, 2)
plt.title('Training accuracy')
plt.xlabel('Epoch')
plt.subplot(3, 1, 3)
plt.title('Validation accuracy')
plt.xlabel('Epoch')

plt.subplot(3, 1, 1)
plt.plot(solver.loss_history, 'o', label='baseline')
plt.plot(bn_solver.loss_history, 'o', label='batchnorm')

plt.subplot(3, 1, 2)
plt.plot(solver.train_acc_history, '-o', label='baseline')
plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')

plt.subplot(3, 1, 3)
plt.plot(solver.val_acc_history, '-o', label='baseline')
plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')

for i in [1, 2, 3]:
  plt.subplot(3, 1, i)
  plt.legend(loc='upper center', ncol=4)
plt.gcf().set_size_inches(15, 15)
plt.show()

[Figure: training loss per iteration, plus training and validation accuracy per epoch, for the baseline vs. batchnorm networks]

Batch normalization and initialization

We will now run a small experiment to study the interaction of batch normalization and weight initialization.

The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.

```python
# Try training a very deep net with batchnorm
hidden_dims = [50, 50, 50, 50, 50, 50, 50]

num_train = 1000
small_data = {
  'X_train': data['X_train'][:num_train],
  'y_train': data['y_train'][:num_train],
  'X_val': data['X_val'],
  'y_val': data['y_val'],
}

bn_solvers = {}
solvers = {}
weight_scales = np.logspace(-4, 0, num=20)
for i, weight_scale in enumerate(weight_scales):
  print 'Running weight scale %d / %d' % (i + 1, len(weight_scales))
  bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)
  model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)

  bn_solver = Solver(bn_model, small_data,
                  num_epochs=10, batch_size=50,
                  update_rule='adam',
                  optim_config={
                    'learning_rate': 1e-3,
                  },
                  verbose=False, print_every=200)
  bn_solver.train()
  bn_solvers[weight_scale] = bn_solver

  solver = Solver(model, small_data,
                  num_epochs=10, batch_size=50,
                  update_rule='adam',
                  optim_config={
                    'learning_rate': 1e-3,
                  },
                  verbose=False, print_every=200)
  solver.train()
  solvers[weight_scale] = solver
```
```
Running weight scale 1 / 20
Running weight scale 2 / 20
Running weight scale 3 / 20
Running weight scale 4 / 20
Running weight scale 5 / 20
Running weight scale 6 / 20
Running weight scale 7 / 20
Running weight scale 8 / 20
Running weight scale 9 / 20
Running weight scale 10 / 20
Running weight scale 11 / 20
Running weight scale 12 / 20
Running weight scale 13 / 20
Running weight scale 14 / 20
Running weight scale 15 / 20
Running weight scale 16 / 20
Running weight scale 17 / 20
Running weight scale 18 / 20
Running weight scale 19 / 20
Running weight scale 20 / 20
```
```python
# Plot results of weight scale experiment
best_train_accs, bn_best_train_accs = [], []
best_val_accs, bn_best_val_accs = [], []
final_train_loss, bn_final_train_loss = [], []

for ws in weight_scales:
  best_train_accs.append(max(solvers[ws].train_acc_history))
  bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))

  best_val_accs.append(max(solvers[ws].val_acc_history))
  bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))

  final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))
  bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))

plt.subplot(3, 1, 1)
plt.title('Best val accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best val accuracy')
plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')
plt.legend(ncol=2, loc='lower right')

plt.subplot(3, 1, 2)
plt.title('Best train accuracy vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Best training accuracy')
plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')
plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')
plt.legend()

plt.subplot(3, 1, 3)
plt.title('Final training loss vs weight initialization scale')
plt.xlabel('Weight initialization scale')
plt.ylabel('Final training loss')
plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')
plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')
plt.legend()

plt.gcf().set_size_inches(10, 15)
plt.show()
```

[Figure: best validation accuracy, best training accuracy, and final training loss as functions of weight initialization scale, baseline vs. batchnorm]

Question:

Describe the results of this experiment, and try to give a reason why the experiment gave the results that it did.

Answer:

If weight_scale is too small, the activations at every layer are also very small, so the final outputs for different inputs end up nearly identical; the gradients in backpropagation are correspondingly tiny, so each update step changes the parameters by almost nothing. A batch normalization layer greatly reduces this dependence on weight initialization: even if weight_scale starts out very small, each layer's activations are renormalized to zero mean and unit variance, so all features stay on the same order of magnitude and gradient propagation does not suffer from some features being far too large and others far too small (which would lead to inaccurate classification).
If weight_scale is too large, on the other hand, the weights and activations are huge; if the learning rate stays at its original, relatively large value, the loss easily diverges and accuracy drops.
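To illustrate the first point, here is a small self-contained sketch (my own, not part of the assignment): it pushes random data through a stack of affine+ReLU layers initialized with a tiny weight scale and prints the activation standard deviation per layer, first without and then with a per-feature normalization step.

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(200, 50)
for use_bn in [False, True]:
  h = x
  stds = []
  for layer in xrange(8):
    W = 1e-3 * np.random.randn(50, 50)  # very small weight scale
    h = np.maximum(0, h.dot(W))         # affine + ReLU
    if use_bn:
      # normalize each feature to zero mean / unit variance (gamma=1, beta=0)
      h = (h - h.mean(axis=0)) / (h.std(axis=0) + 1e-5)
    stds.append(h.std())
  label = 'with BN' if use_bn else 'no BN '
  print label, ['%.0e' % s for s in stds]
```

Without normalization the activation scale collapses geometrically (roughly by a factor of weight_scale times the square root of the fan-in per layer), while with normalization it stays at order one.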

References

http://www.jianshu.com/p/9c4396653324
http://blog.csdn.net/xieyi4650/article/category/6498212
