CS231n A1: Softmax


Reference: http://blog.csdn.net/xs1997/article/details/75949043

Much respect to the original author.



Softmax



As instructed, the first step is to load the dataset; a sketch of the loading code is below, followed by the resulting array shapes.
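A minimal sketch of the standard CS231n loading step, assuming the assignment's load_CIFAR10 helper; the get_CIFAR10_data wrapper name, the dataset path, and the preprocessing order are assumptions based on the notebook scaffold:

import numpy as np
from cs231n.data_utils import load_CIFAR10  # assignment helper

def get_CIFAR10_data(num_training=49000, num_validation=1000,
                     num_test=1000, num_dev=500):
    # Load the raw CIFAR-10 data (path assumed from the assignment layout).
    X_train, y_train, X_test, y_test = load_CIFAR10('cs231n/datasets/cifar-10-batches-py')

    # Carve out validation / dev subsets from the training data.
    X_val = X_train[num_training:num_training + num_validation]
    y_val = y_train[num_training:num_training + num_validation]
    X_train, y_train = X_train[:num_training], y_train[:num_training]
    dev_idx = np.random.choice(num_training, num_dev, replace=False)
    X_dev, y_dev = X_train[dev_idx], y_train[dev_idx]
    X_test, y_test = X_test[:num_test], y_test[:num_test]

    # Flatten each image, subtract the mean training image, and append a
    # constant bias feature; hence the 3073 = 32*32*3 + 1 columns below.
    splits = [X.reshape(X.shape[0], -1).astype(np.float64)
              for X in (X_train, X_val, X_test, X_dev)]
    mean_image = np.mean(splits[0], axis=0)
    splits = [np.hstack([X - mean_image, np.ones((X.shape[0], 1))])
              for X in splits]
    X_train, X_val, X_test, X_dev = splits
    return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev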

Train data shape:  (49000, 3073)
Train labels shape:  (49000,)
Validation data shape:  (1000, 3073)
Validation labels shape:  (1000,)
Test data shape:  (1000, 3073)
Test labels shape:  (1000,)
dev data shape:  (500, 3073)
dev labels shape:  (500,)


Next, fill in the loss computation; what we compute here is the cross-entropy (softmax) loss.
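For reference, these are the quantities the code below implements: each example's numerically stabilized cross-entropy loss and its weight gradient,

L_i = -\log\frac{e^{f_{y_i}}}{\sum_j e^{f_j}} = -f_{y_i} + \log\sum_j e^{f_j}, \qquad f = x_i W,

\frac{\partial L_i}{\partial W_{:,j}} = \big(p_j - \mathbb{1}[j = y_i]\big)\, x_i, \qquad p_j = \frac{e^{f_j}}{\sum_k e^{f_k}},

plus the regularization term \frac{1}{2}\,\mathrm{reg}\sum W^2, whose gradient is \mathrm{reg} \cdot W.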

import numpy as np
from random import shuffle

def softmax_loss_naive(W, X, y, reg):
  """
  Softmax loss function, naive implementation (with loops)

  Inputs have dimension D, there are C classes, and we operate on minibatches
  of N examples.

  Inputs:
  - W: A numpy array of shape (D, C) containing weights.
  - X: A numpy array of shape (N, D) containing a minibatch of data.
  - y: A numpy array of shape (N,) containing training labels; y[i] = c means
    that X[i] has label c, where 0 <= c < C.
  - reg: (float) regularization strength

  Returns a tuple of:
  - loss as single float
  - gradient with respect to weights W; an array of same shape as W
  """
  # Initialize the loss and gradient to zero.
  loss = 0.0
  dW = np.zeros_like(W)

  #############################################################################
  # TODO: Compute the softmax loss and its gradient using explicit loops.     #
  # Store the loss in loss and the gradient in dW. If you are not careful     #
  # here, it is easy to run into numeric instability. Don't forget the        #
  # regularization!                                                           #
  #############################################################################
  num_train, dim = X.shape
  num_class = W.shape[1]

  for i in range(num_train):
    scores = X[i].dot(W)
    shift_scores = scores - np.max(scores)  # shift for numerical stability
    Li = -shift_scores[y[i]] + np.log(np.sum(np.exp(shift_scores)))
    loss += Li
    for j in range(num_class):
      output = np.exp(shift_scores[j]) / np.sum(np.exp(shift_scores))
      if j == y[i]:
        dW[:, j] += (-1 + output) * X[i]
      else:
        dW[:, j] += output * X[i]

  loss /= num_train
  loss += 0.5 * reg * np.sum(W * W)  # add the regularization term
  dW /= num_train
  dW += reg * W
  #############################################################################
  #                          END OF YOUR CODE                                 #
  #############################################################################

  return loss, dW

With reg = 0, compute the loss and compare it against a sanity-check value.
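A sketch of the check, mirroring the notebook cell; the small random weight scale is assumed from the scaffold:

W = np.random.randn(3073, 10) * 0.0001  # small random weights (scale assumed)
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))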
loss: 2.348136
sanity check: 2.302585

This number is close to -log(0.1). The dataset has 10 classes with an equal number of images per class, so with small random weights each class is assigned a probability of roughly 1/10.
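Spelled out: -\log\frac{1}{10} = \log 10 \approx 2.3026, which is exactly the sanity-check value printed above.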


Gradient check, with reg = 0 and reg = 1e2 respectively; a sketch of the check is below, followed by the results.
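The check numerically estimates the gradient at a handful of random coordinates and compares each against the analytic gradient. A sketch, assuming the assignment's grad_check_sparse helper (its exact signature may differ in your scaffold):

from cs231n.gradient_check import grad_check_sparse

# Run 1: no regularization. Reuses W from the loss check above.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_check_sparse(f, W, grad, 10)

# Run 2: with regularization, so the reg * W term is exercised too.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 1e2)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 1e2)[0]
grad_check_sparse(f, W, grad, 10)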

Results:

Run 1 (reg = 0):
numerical: -3.846592 analytic: -3.846592, relative error: 1.973510e-08
numerical: -1.159778 analytic: -1.159778, relative error: 2.735084e-08
numerical: -0.049621 analytic: -0.049621, relative error: 1.439176e-06
numerical: 2.765077 analytic: 2.765077, relative error: 3.663603e-09
numerical: 0.229068 analytic: 0.229068, relative error: 1.597017e-07
numerical: -0.443602 analytic: -0.443603, relative error: 2.859726e-08
numerical: 1.068643 analytic: 1.068643, relative error: 5.343658e-08
numerical: -2.863785 analytic: -2.863786, relative error: 1.980570e-08
numerical: -1.617723 analytic: -1.617723, relative error: 1.486114e-08
numerical: 0.095104 analytic: 0.095104, relative error: 1.478881e-07
Run 2 (reg = 1e2):
numerical: -0.698316 analytic: -0.698316, relative error: 4.565008e-08
numerical: 2.410562 analytic: 2.410562, relative error: 1.025836e-08
numerical: 0.462818 analytic: 0.462818, relative error: 6.981234e-09
numerical: 3.274987 analytic: 3.274987, relative error: 2.023751e-08
numerical: 1.732840 analytic: 1.732840, relative error: 1.975346e-08
numerical: 0.092836 analytic: 0.092836, relative error: 2.728099e-07
numerical: -1.147061 analytic: -1.147061, relative error: 4.169556e-08
numerical: 0.022323 analytic: 0.022323, relative error: 9.982905e-07
numerical: 2.642040 analytic: 2.642040, relative error: 5.937875e-09
numerical: -0.303487 analytic: -0.303487, relative error: 5.382778e-08


Next, fill in the vectorized algorithm, softmax_loss_vectorized.
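In matrix form, with score matrix S = XW, row-wise softmax probabilities P, and one-hot label matrix Y, the batch gradient the vectorized code computes is

\frac{\partial L}{\partial W} = \frac{1}{N} X^T (P - Y) + \mathrm{reg} \cdot W,

which is exactly the dS trick below: copy the probabilities, then subtract 1 at each example's correct-class entry.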

def softmax_loss_vectorized(W, X, y, reg):
  """
  Softmax loss function, vectorized version.

  Inputs and outputs are the same as softmax_loss_naive.
  """
  # Initialize the loss and gradient to zero.
  loss = 0.0
  dW = np.zeros_like(W)

  #############################################################################
  # TODO: Compute the softmax loss and its gradient using no explicit loops.  #
  # Store the loss in loss and the gradient in dW. If you are not careful     #
  # here, it is easy to run into numeric instability. Don't forget the        #
  # regularization!                                                           #
  #############################################################################
  num_train, dim = X.shape
  num_class = W.shape[1]

  scores = X.dot(W)
  # Shift each row by its max for numerical stability before exponentiating.
  shift_scores = scores - np.max(scores, axis=1).reshape(-1, 1)
  output = np.exp(shift_scores) / np.sum(np.exp(shift_scores), axis=1).reshape(-1, 1)
  loss = -np.sum(np.log(output[range(num_train), list(y)]))
  loss /= float(num_train)
  loss += 0.5 * reg * np.sum(W * W)

  # Gradient: softmax probabilities, minus 1 at the correct-class entries.
  dS = output.copy()
  dS[range(num_train), list(y)] += -1
  dW = (X.T).dot(dS)
  dW = dW / num_train + reg * W
  #############################################################################
  #                          END OF YOUR CODE                                 #
  #############################################################################
  return loss, dW


Running both versions shows a large difference in speed; a sketch of the timing harness is below, followed by its output.
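A sketch of the comparison, mirroring the notebook's tic/toc pattern; the reg value passed here is an assumption:

import time

tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)  # reg value assumed
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))

tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))

# The two implementations should agree to numerical precision.
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % np.linalg.norm(grad_naive - grad_vectorized, ord='fro'))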

naive loss: 2.396191e+00 computed in 0.125010s
vectorized loss: 2.396191e+00 computed in 0.000000s



Next, use the validation set to tune the hyperparameters and select the best combination.

# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 5e-7]
regularization_strengths = [5e4, 1e8]

################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained softmax classifer in best_softmax.                          #
################################################################################
hypara = [(x, y) for x in learning_rates for y in regularization_strengths]
for Lrate, regS in hypara:
    softmax = Softmax()
    loss_hist = softmax.train(X_train, y_train, Lrate, regS,
                              num_iters=900, verbose=False)
    y_train_pred = softmax.predict(X_train)
    accuracy_train = np.mean(y_train == y_train_pred)
    y_val_pred = softmax.predict(X_val)
    accuracy_val = np.mean(y_val == y_val_pred)
    results[(Lrate, regS)] = (accuracy_train, accuracy_val)
    if best_val < accuracy_val:
        best_val = accuracy_val
        best_softmax = softmax
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))

print('best validation accuracy achieved during cross-validation: %f' % best_val)

Results:

lr 1.000000e-07 reg 5.000000e+04 train accuracy: 0.331673 val accuracy: 0.338000
lr 1.000000e-07 reg 1.000000e+08 train accuracy: 0.100265 val accuracy: 0.087000
lr 5.000000e-07 reg 5.000000e+04 train accuracy: 0.326184 val accuracy: 0.350000
lr 5.000000e-07 reg 1.000000e+08 train accuracy: 0.100265 val accuracy: 0.087000
best validation accuracy achieved during cross-validation: 0.350000


# Evaluate the best softmax on the test set.
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))

Result:

softmax on raw pixels final test set accuracy: 0.349000


Finally, let's look at a visualization of the learned weights:
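A sketch of the standard weight-visualization cell; the class-name list and the 2x5 subplot layout follow the notebook's convention, so treat the details as assumptions:

import matplotlib.pyplot as plt

w = best_softmax.W[:-1, :]      # strip the bias row
w = w.reshape(32, 32, 3, 10)    # back to image shape, one slice per class
w_min, w_max = np.min(w), np.max(w)

classes = ['plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
    plt.subplot(2, 5, i + 1)
    # Rescale weights into 0..255 so each template displays as an image.
    wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
    plt.imshow(wimg.astype('uint8'))
    plt.axis('off')
    plt.title(classes[i])
plt.show()

Each template typically looks like a blurry archetype of its class, which is about all a linear classifier on raw pixels can learn.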