cs231n assignment1 svm


The function svm_loss_naive(W, X, y, reg) computes the loss and the gradient with respect to the weight matrix W from the training data (see the CS231n course slides).
For understanding the loss L and the score vector S produced for each class, I referred to the official course notes translated on Zhihu (知乎cs231n官方讲义翻译).
Filled-in Python code:

def svm_loss_naive(W, X, y, reg):
  """
  Structured SVM loss function, naive implementation (with loops).

  Inputs have dimension D, there are C classes, and we operate on minibatches
  of N examples.

  Inputs:
  - W: A numpy array of shape (D, C) containing weights.
  - X: A numpy array of shape (N, D) containing a minibatch of data.
  - y: A numpy array of shape (N,) containing training labels; y[i] = c means
    that X[i] has label c, where 0 <= c < C.
  - reg: (float) regularization strength

  Returns a tuple of:
  - loss as single float
  - gradient with respect to weights W; an array of same shape as W
  """
  dW = np.zeros(W.shape)  # initialize the gradient as zero

  # compute the loss and the gradient
  num_classes = W.shape[1]
  num_train = X.shape[0]
  loss = 0.0
  for i in range(num_train):
    scores = X[i].dot(W)                 # matrix product of one example with W
    correct_class_score = scores[y[i]]   # S_yi
    # print("X.shape:", X.shape)             # (500, 3073) matrix
    # print("X[i].shape:", X[i].shape)       # (3073,) vector
    # print("scores.shape:", scores.shape)   # (10,) vector
    for j in range(num_classes):
      if j == y[i]:
        continue
      margin = scores[j] - correct_class_score + 1  # note delta = 1, a hyperparameter
      if margin > 0:
        # loss accrues whenever the true-class score fails to beat this class's score by at least delta
        loss += margin
        # Compute gradients (one inner and one outer sum)
        # Wonderfully compact and hard to read
        dW[:, y[i]] -= X[i, :].T  # this is really a sum over j != y_i
        dW[:, j] += X[i, :].T     # sums each contribution of the x_i's

  # Right now the loss is a sum over all training examples, but we want it
  # to be an average instead so we divide by num_train.
  loss /= num_train
  dW /= num_train

  # print("(W*W).shape:", (W*W).shape)   # (3073, 10)
  # print("np.sum(W*W):", np.sum(W*W))   # scalar: sum of every element of W*W

  # Add regularization to the loss.
  loss += 0.5 * reg * np.sum(W * W)
  # Gradient regularization that carries through per https://piazza.com/class/i37qi08h43qfv?cid=118
  dW += reg * W

  return loss, dW
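As a quick numeric check of what the inner loop computes for one example (toy scores I made up, three classes, delta = 1):

import numpy as np

scores = np.array([3.2, 5.1, -1.7])  # scores of one example over 3 classes
y_i = 0                              # true class
delta = 1.0
margins = np.maximum(0, scores - scores[y_i] + delta)
margins[y_i] = 0                     # skip j == y_i
print(margins.sum())                 # max(0, 5.1-3.2+1) + max(0, -1.7-3.2+1) = 2.9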

When I first read the sample code, three things confused me:

  1. How is the size of the weight matrix W, 3073*10, determined?
    10 is the number of classes, and 3073 is the 3072 features of the dataset plus 1. Why plus 1?
    The answer is in the translated notes on Zhihu (知乎CS231n官方笔记翻译——线性分类器上):
    the extra dimension of W holds the bias b (with X of shape (N, 3073) and W of shape (3073, 10), this is the last row of W), and each input example correspondingly gets an extra feature that is always 1 (see the short sketch after this list).
  2. What does np.sum(W*W) compute?
    It is the regularization loss before multiplying by reg: a scalar, the sum of the squares of every element of W.
  3. Why compute the gradient at all, and how is it used to update W?
    In the train function, each iteration takes a gradient-descent step: the weights are updated by subtracting learning_rate times grad.
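A minimal sketch of the bias trick from point 1 (the sizes mirror CIFAR-10; the array names here are mine):

import numpy as np

# hypothetical small example: 5 images with 3072 raw pixel features each
X_raw = np.random.randn(5, 3072)

# bias trick: append a constant-1 feature, so the last row of W acts as the bias b
X = np.hstack([X_raw, np.ones((X_raw.shape[0], 1))])   # shape (5, 3073)
W = 0.001 * np.random.randn(3073, 10)                   # 10 classes

scores = X.dot(W)   # shape (5, 10); equivalent to X_raw.dot(W[:-1]) + W[-1]
print(scores.shape)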

Vectorized implementation:
For the vectorized implementation, first understand the loss formula:
L_i = Σ_{j ≠ y_i} max(0, s_j − s_{y_i} + Δ), with Δ = 1 in this assignment.
1. First, compute scores, the score of every image on every class under the current W. Following the formula s = Wx (implemented here as scores = X.dot(W), since the examples are stored as rows), the matrix product of the input data and the weight matrix gives a 500*10 scores matrix; each row holds one image's scores under the 10 class models.
  2. Compute scores_correct, i.e. S_yi in the formula: from each row of scores, take the column indexed by the true label y. The result is a 500*1 vector.
  3. Compute margins, the gap between scores and scores_correct. The maximum operation filters these gaps: if the correct-class score beats an incorrect class's score by more than Delta, the model separates this image well on that class and no loss is incurred; if instead the score difference plus Delta (the threshold) is still greater than 0, that margin is added to the loss.
    For the gradient, each positive margin contributes +x_i to column j of dW and -x_i to column y_i, so the code binarizes margins (entries > 0 become 1), writes the negative row sums into the true-label positions, and forms dW = X.T.dot(margins)/num_train plus the regularization term reg * W.
    The Python code is as follows:
def svm_loss_vectorized(W, X, y, reg):
  """
  Structured SVM loss function, vectorized implementation.

  Inputs and outputs are the same as svm_loss_naive.
  """
  loss = 0.0
  dW = np.zeros(W.shape)  # initialize the gradient as zero

  scores = X.dot(W)                                  # (500, 10)
  num_classes = W.shape[1]
  num_train = X.shape[0]
  scores_correct = scores[np.arange(num_train), y]   # (500,) -- has to be reshaped, or broadcasting raises a ValueError
  scores_correct = np.reshape(scores_correct, (num_train, -1))
  margins = scores - scores_correct + 1              # delta = 1
  margins = np.maximum(0, margins)
  margins[np.arange(num_train), y] = 0               # exclude the true class from the loss, i.e. j != y_i in the formula
  loss = np.sum(margins) / num_train
  loss += 0.5 * reg * np.sum(W * W)

  # compute the gradient
  margins[margins > 0] = 1
  row_sum = np.sum(margins, axis=1)                  # (N,)
  margins[np.arange(num_train), y] = -row_sum
  dW += np.dot(X.T, margins) / num_train + reg * W   # D by C

  return loss, dW
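A quick way to sanity-check the naive and vectorized versions against each other on random data (the sizes, seed, and regularization value here are my own choices; it assumes both functions above are in scope):

import numpy as np

np.random.seed(0)
W = 0.001 * np.random.randn(13, 4)   # tiny problem: D=13 (incl. bias), C=4
X = np.random.randn(20, 13)          # N=20 examples
y = np.random.randint(4, size=20)

loss_naive, dW_naive = svm_loss_naive(W, X, y, reg=1e2)
loss_vec, dW_vec = svm_loss_vectorized(W, X, y, reg=1e2)

# both versions should agree up to floating-point error
print('loss difference:', abs(loss_naive - loss_vec))
print('gradient difference:', np.linalg.norm(dW_naive - dW_vec, ord='fro'))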

A few notes on the numpy functions used:

  1. np.sum(a)
    If a is an array and no axis argument is given, it sums all of a's elements into a scalar.
  2. np.reshape(a, newshape)
    Redefines the shape of a; a is the original array and newshape is the new shape, written as (num_rows, num_cols) for a 2-D array.

  3. The difference between np.maximum(x1, x2) and np.max(a):
    The former compares x1 and x2 element-wise and keeps the larger of each pair; it takes at least two arguments, and one of them may be a scalar. In the code above, comparing every element of margins against 0 is done by passing 0 as x1. The latter reduces a single array: without axis it returns the global maximum, and with axis specified it returns the maximum of each row or column. A short demo follows this list.
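The short demo mentioned above (a toy array only):

import numpy as np

a = np.arange(6).reshape(2, 3)      # [[0, 1, 2], [3, 4, 5]]
print(np.sum(a))                    # 15 -- no axis: sum of all elements
print(np.reshape(a, (3, -1)))       # 3x2; -1 lets numpy infer the other dimension
print(np.maximum(0, a - 3))         # element-wise max against the scalar 0 (like the hinge)
print(np.max(a, axis=1))            # [2, 5] -- maximum within each row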


Training and prediction:
Training uses stochastic gradient descent: each iteration samples a subset of the examples, computes the loss and gradient on that batch, and updates the weight matrix.

def train(self, X, y, learning_rate=1e-3, reg=1e-5, num_iters=100,
          batch_size=200, verbose=False):
    """
    Train this linear classifier using stochastic gradient descent.

    Inputs:
    - X: A numpy array of shape (N, D) containing training data; there are N
      training samples each of dimension D.
    - y: A numpy array of shape (N,) containing training labels; y[i] = c
      means that X[i] has label 0 <= c < C for C classes.
    - learning_rate: (float) learning rate for optimization.
    - reg: (float) regularization strength.
    - num_iters: (integer) number of steps to take when optimizing
    - batch_size: (integer) number of training examples to use at each step.
    - verbose: (boolean) If true, print progress during optimization.

    Outputs:
    A list containing the value of the loss function at each training iteration.
    """
    num_train, dim = X.shape
    num_classes = np.max(y) + 1  # assume y takes values 0...K-1 where K is number of classes
    if self.W is None:
      # lazily initialize W
      self.W = 0.001 * np.random.randn(dim, num_classes)

    # Run stochastic gradient descent to optimize W
    loss_history = []
    for it in range(num_iters):
      # Sample batch_size examples and their labels for this step.
      # (The assignment hint suggests sampling with replacement for speed; replace=False also works.)
      sample_indices = np.random.choice(num_train, batch_size, replace=False)
      X_batch = X[sample_indices, :]
      y_batch = y[sample_indices]

      # evaluate loss and gradient
      loss, grad = self.loss(X_batch, y_batch, reg)
      loss_history.append(loss)

      # perform parameter update: step in the direction opposite the gradient
      self.W += -learning_rate * grad

      if verbose and it % 100 == 0:
        print('iteration %d / %d: loss %f' % (it, num_iters, loss))

    return loss_history

Prediction: compute the scores from the data and the learned weights (X times W) and take the class with the highest score as the most likely class for that image:

def predict(self, X):
    """
    Use the trained weights of this linear classifier to predict labels for
    data points.

    Inputs:
    - X: A numpy array of shape (N, D) containing data; each row is a
      D-dimensional point.

    Returns:
    - y_pred: Predicted labels for the data in X. y_pred is a 1-dimensional
      array of length N, and each element is an integer giving the predicted
      class.
    """
    y_pred = np.zeros(X.shape[0])
    # scores = X.dot(self.W); y_pred = np.max(scores, axis=1)   # wrong first attempt: max returns the score value
    # We want the class index, not the score, so use argmax (axis=0 goes down columns, axis=1 across rows).
    y_pred = np.argmax(np.dot(self.W.T, X.T), axis=0)
    return y_pred

The numpy function used here to get the index of the maximum is np.argmax: the class label is exactly that index (0-9, 10 classes in total), which is why my first attempt with max (which returns the value, not the index) was wrong. axis=0 takes the maximum down each column and axis=1 across each row; with scores of shape (N, C) computed as X.dot(W) you would use axis=1, while the code above transposes to shape (C, N) and uses axis=0. The two are equivalent.
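A toy illustration of the max/argmax distinction and the axis equivalence (the numbers are made up):

import numpy as np

scores = np.array([[0.2, 1.5, -0.3],
                   [2.0, 0.1,  0.4]])   # 2 examples, 3 classes
print(np.max(scores, axis=1))       # [1.5, 2.0]  -- the best scores (not what we want)
print(np.argmax(scores, axis=1))    # [1, 0]      -- the predicted class indices
print(np.argmax(scores.T, axis=0))  # [1, 0]      -- equivalent after transposing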

Using the validation set to choose the hyperparameters, the learning rate learning_rate and the regularization strength regularization_strengths:

# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = [1e-7, 5e-5]
regularization_strengths = [5e4, 1e5]

# results is a dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1    # The highest validation accuracy that we have seen so far.
best_svm = None  # The LinearSVM object that achieved the highest validation rate.

# Assignment hint: use a small num_iters while developing, then rerun with a larger value.
num_iters = 5000
for lr in learning_rates:
    for re in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train, y_train, learning_rate=lr, reg=re,
                  num_iters=num_iters, verbose=True)
        y_train_pred = svm.predict(X_train)
        training_accuracy = np.mean(y_train == y_train_pred)
        y_validation_pred = svm.predict(X_val)
        validation_accuracy = np.mean(y_validation_pred == y_val)
        results[(lr, re)] = (training_accuracy, validation_accuracy)
        if validation_accuracy > best_val:
            best_val = validation_accuracy
            print("--------------best_val:", best_val)
            best_svm = svm

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
          lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)

Two nested for loops train an SVM for every combination of learning_rate and regularization_strengths, record the accuracy on the training and validation sets, and keep the SVM with the highest validation accuracy. Setting the number of iterations too low (a few dozen) does not work, presumably because SGD has barely moved the weights away from their random initialization after so few batches.
I set num_iters to 5000, which is too large in another sense: for some settings the loss became NaN. (Those are the lr = 5e-5 runs, where the step size is so large that the loss diverges and the accuracy stays at chance level; the iteration count itself is not the cause.) The best validation accuracy is 37.4%, and the accuracy on the raw-pixel test set is 36.1%. That beats kNN (about 30%), but the course says it should be above 40%; a finer search over smaller learning rates and regularization strengths would be the natural next step:

lr 1.000000e-07 reg 5.000000e+04 train accuracy: 0.371327 val accuracy: 0.374000
lr 1.000000e-07 reg 1.000000e+05 train accuracy: 0.351286 val accuracy: 0.369000
lr 5.000000e-05 reg 5.000000e+04 train accuracy: 0.100265 val accuracy: 0.087000
lr 5.000000e-05 reg 1.000000e+05 train accuracy: 0.100265 val accuracy: 0.087000
best validation accuracy achieved during cross-validation: 0.374000
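The test-set figure quoted above (36.1%) comes from evaluating the selected model on the held-out test split, along these lines (a minimal sketch; X_test and y_test are the assignment notebook's test arrays, not defined in this post):

y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('linear SVM on raw pixels final test set accuracy: %f' % test_accuracy)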

Some of the training output with num_iters = 5000 (screenshots omitted).


Softmax classifier:
Cross-entropy loss: L_i = -log( e^{f_{y_i}} / Σ_j e^{f_j} ), or equivalently L_i = -f_{y_i} + log Σ_j e^{f_j}, which is the form the code below computes.
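A worked micro-example of the second form (toy scores I made up; the shift by the maximum does not change the loss but prevents exp from overflowing):

import numpy as np

f = np.array([3.2, 5.1, -1.7])        # class scores for one example
y_i = 0                               # true class
f = f - np.max(f)                     # shift for numerical stability
p = np.exp(f) / np.sum(np.exp(f))     # softmax probabilities, ~[0.13, 0.87, 0.00]
loss = -np.log(p[y_i])                # cross-entropy loss, ~2.04
print(p, loss)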

The difference from the SVM lies in how the loss is expressed; see the translated official notes (知乎官方笔记线性分类器下), which include a figure comparing softmax and SVM.

# Use the validation set to set the learning rate and regularization strength.
# This should be identical to the validation that you did for the SVM; save
# the best trained softmax classifier in best_softmax.
num_iters = 1200
for lr in learning_rates:
    for re in regularization_strengths:
        softMax = Softmax()
        softMax.train(X_train, y_train, learning_rate=lr, reg=re,
                      num_iters=num_iters, verbose=True)
        y_train_pred = softMax.predict(X_train)
        training_accuracy = np.mean(y_train == y_train_pred)
        y_validation_pred = softMax.predict(X_val)
        validation_accuracy = np.mean(y_validation_pred == y_val)
        results[(lr, re)] = (training_accuracy, validation_accuracy)
        if validation_accuracy > best_val:
            best_val = validation_accuracy
            print("--------------best_val:", best_val)
            best_softmax = softMax
def softmax_loss_naive(W, X, y, reg):
  """
  Softmax loss function, naive implementation (with loops)

  Inputs have dimension D, there are C classes, and we operate on minibatches
  of N examples.

  Inputs:
  - W: A numpy array of shape (D, C) containing weights.
  - X: A numpy array of shape (N, D) containing a minibatch of data.
  - y: A numpy array of shape (N,) containing training labels; y[i] = c means
    that X[i] has label c, where 0 <= c < C.
  - reg: (float) regularization strength

  Returns a tuple of:
  - loss as single float
  - gradient with respect to weights W; an array of same shape as W
  """
  # Initialize the loss and gradient to zero.
  loss = 0.0
  dW = np.zeros_like(W)

  num_train = X.shape[0]
  num_classes = W.shape[1]
  for i in range(num_train):
    f = X[i].dot(W)
    f -= np.max(f)   # shift scores so the max is 0; prevents numerical instability in exp
    f_correct = f[y[i]]
    exp_sum = np.sum(np.exp(f))
    loss += -f_correct + np.log(exp_sum)
    # gradient of -f_correct with respect to W[:, y[i]] is -X[i]
    dW[:, y[i]] -= X[i]
    # gradient of log(sum_k exp(f_k)) with respect to W[:, j] is the softmax probability p_j times X[i]
    for j in range(num_classes):
      dW[:, j] += (np.exp(f[j]) / exp_sum) * X[i]

  loss /= num_train
  loss += 0.5 * reg * np.sum(W * W)
  dW /= num_train
  dW += reg * W

  return loss, dW
def softmax_loss_vectorized(W, X, y, reg):
  """
  Softmax loss function, vectorized version.

  Inputs and outputs are the same as softmax_loss_naive.
  """
  # Initialize the loss and gradient to zero.
  loss = 0.0
  dW = np.zeros_like(W)

  num_train = X.shape[0]
  num_classes = W.shape[1]
  scores = X.dot(W)
  scores -= np.max(scores, axis=1)[:, np.newaxis]   # shift for numerical stability
  exp_scores = np.exp(scores)
  sum_exp_scores = np.sum(exp_scores, axis=1)
  correct_class_score = scores[range(num_train), y]
  loss = np.sum(np.log(sum_exp_scores)) - np.sum(correct_class_score)
  exp_scores = exp_scores / sum_exp_scores[:, np.newaxis]   # softmax probabilities, shape (N, C)

  # This loop could be rewritten as a single matrix operation (see the note below);
  # here the gradient is accumulated example by example.
  for i in range(num_train):
    dW += exp_scores[i] * X[i][:, np.newaxis]
    dW[:, y[i]] -= X[i]

  loss /= num_train
  loss += 0.5 * reg * np.sum(W * W)
  dW /= num_train
  dW += reg * W

  return loss, dW
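The trailing per-example loop can be collapsed into one matrix multiplication: subtract 1 from the true-class probabilities and multiply by X.T. A minimal gradient-only sketch (the function name is mine, not part of the assignment; it assumes the same 0.5 * reg * sum(W*W) regularization as above):

import numpy as np

def softmax_grad_fully_vectorized(W, X, y, reg):
  # Same inputs as softmax_loss_naive; returns only dW.
  num_train = X.shape[0]
  scores = X.dot(W)
  scores -= np.max(scores, axis=1, keepdims=True)   # numerical stability
  probs = np.exp(scores) / np.sum(np.exp(scores), axis=1, keepdims=True)
  probs[np.arange(num_train), y] -= 1               # dL_i/df_j = p_j - 1{j == y_i}
  return X.T.dot(probs) / num_train + reg * W       # chain rule through f = XW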

Test results:

lr 1.000000e-07 reg 5.000000e+04 train accuracy: 0.327551 val accuracy: 0.337000
lr 1.000000e-07 reg 1.000000e+08 train accuracy: 0.100265 val accuracy: 0.087000
lr 5.000000e-07 reg 5.000000e+04 train accuracy: 0.332837 val accuracy: 0.342000
lr 5.000000e-07 reg 1.000000e+08 train accuracy: 0.100265 val accuracy: 0.087000
best validation accuracy achieved during cross-validation: 0.342000
softmax on raw pixels final test set accuracy: 0.343000

Weight visualization:

# Visualize the learned weights for each class
w = best_softmax.W[:-1, :]  # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
  plt.subplot(2, 5, i + 1)
  # Rescale the weights to be between 0 and 255
  wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
  plt.imshow(wimg.astype('uint8'))
  plt.axis('off')
  plt.title(classes[i])

[Figure: the learned weight templates for the 10 CIFAR-10 classes, omitted.]
