CS231n Module 1
- Image Classification
- Introduction
- Nearest Neighbor Classifier
- K-Nearest Neighbor Classifier
- Summary: Applying kNN in practice
- Linear Classification
Image Classification
Introduction
This section introduces the image classification problem: the task of assigning an input image one label from a fixed set of categories.
The running example is classifying a picture of a cat.
In general, a computer model that classifies images must cope with several challenges: viewpoint variation, scale variation, deformation, occlusion, illumination conditions, background clutter, and intra-class variation.
The solution here is not an explicit algorithm, such as a sorting algorithm; instead, the computer is trained to recognize objects from a large training dataset. This is called the data-driven approach.
For example, a dataset might look like this:
The task of image classification is to take an array of pixels representing a single image and assign a label to it. The full pipeline can be summarized as: input (a training set of labeled images), learning (training a classifier on that set), and evaluation (predicting labels for new images and comparing them to the ground truth).
Nearest Neighbor Classifier
This part introduces the Nearest Neighbor classifier. Although this classifier has nothing to do with convolutional neural networks and is rarely used in practice, it illustrates the basic approach to the image classification problem.
First, an image classification dataset: CIFAR-10.
http://www.cs.toronto.edu/~kriz/cifar.html
This dataset consists of 60,000 images of 32×32 pixels in 10 classes; the training set contains 50,000 images and the test set 10,000.
See the figure below:
Now suppose we have these 50,000 labeled images and want to label the remaining 10,000. The Nearest Neighbor classifier proceeds as follows:
take an unlabeled image, compare it against each of the 50,000 training images in turn, and assign it the label of the training image at the smallest "distance".
The "distance" is defined as follows.
In this example, each image is a 32×32×3 array of pixel values. Flattening the two images into vectors I1 and I2, their L1 distance is the sum of absolute per-pixel differences:

d1(I1, I2) = Σp |I1^p − I2^p|

where the sum runs over all pixels p.
An example:
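As a toy sketch (the pixel values here are made up, not from the notes), the L1 distance between two flattened four-pixel "images" can be computed with NumPy:

```python
import numpy as np

# two tiny flattened "images" with made-up pixel values
I1 = np.array([56, 32, 10, 18])
I2 = np.array([10, 20, 24, 17])

# L1 distance: sum of absolute per-pixel differences
d1 = np.sum(np.abs(I1 - I2))
print(d1)  # |56-10| + |32-20| + |10-24| + |18-17| = 73
```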
Next, a code implementation of this idea.
First load the data; there are four arrays: the training data/labels and the test data/labels.
Xtr has shape 50000 × 32 × 32 × 3, and Ytr is a 1-D array of length 50,000.
```python
Xtr, Ytr, Xte, Yte = load_CIFAR10('data/cifar10/')  # a magic function we provide
# flatten out all images to be one-dimensional
Xtr_rows = Xtr.reshape(Xtr.shape[0], 32 * 32 * 3)  # Xtr_rows becomes 50000 x 3072
Xte_rows = Xte.reshape(Xte.shape[0], 32 * 32 * 3)  # Xte_rows becomes 10000 x 3072
```
Next, the skeleton of the overall approach:
```python
nn = NearestNeighbor()  # create a Nearest Neighbor classifier class
nn.train(Xtr_rows, Ytr)  # train the classifier on the training images and labels
Yte_predict = nn.predict(Xte_rows)  # predict labels on the test images
# and now print the classification accuracy, which is the average number
# of examples that are correctly predicted (i.e. label matches)
print('accuracy: %f' % np.mean(Yte_predict == Yte))
```
The class definition:
```python
import numpy as np

class NearestNeighbor(object):
    def train(self, X, y):
        """ X is N x D where each row is an example. y is 1-dimensional of size N """
        # the nearest neighbor classifier simply remembers all the training data
        self.Xtr = X
        self.ytr = y

    def predict(self, X):
        """ X is N x D where each row is an example we wish to predict a label for """
        num_test = X.shape[0]
        # make sure that the output type matches the input type
        Ypred = np.zeros(num_test, dtype=self.ytr.dtype)

        # loop over all test rows
        for i in range(num_test):
            # find the nearest training image to the i'th test image
            # using the L1 distance (sum of absolute value differences)
            distances = np.sum(np.abs(self.Xtr - X[i, :]), axis=1)
            min_index = np.argmin(distances)  # get the index with the smallest distance
            Ypred[i] = self.ytr[min_index]    # predict the label of the nearest example

        return Ypred
```
The L2 distance is defined as

d2(I1, I2) = sqrt( Σp (I1^p − I2^p)^2 )

and the corresponding line of code becomes:

```python
distances = np.sqrt(np.sum(np.square(self.Xtr - X[i, :]), axis=1))
```

Note that in a practical nearest-neighbor search the square root can be dropped: sqrt is a monotonic function, so it scales the distances but preserves their ordering.
K-Nearest Neighbor Classifier
The idea is simple: instead of using only the single closest image, find the k training images closest in "distance" and let them vote on the label.
An example is shown in the figure below:
As can be seen, the kNN classifier produces smoother decision boundaries. kNN is fairly common in practice, but choosing k is often difficult; below, a validation dataset is used to tune it:
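The validation loop below assumes a NearestNeighbor class whose predict method accepts k. A minimal sketch of such a majority-vote classifier (the class and variable names here are illustrative, not from the notes) could look like this:

```python
import numpy as np
from collections import Counter

class KNearestNeighbor(object):
    """Sketch of a majority-vote kNN classifier with L1 distance (illustrative)."""

    def train(self, X, y):
        # like the Nearest Neighbor classifier, simply memorize the training data
        self.Xtr = X
        self.ytr = y

    def predict(self, X, k=1):
        num_test = X.shape[0]
        Ypred = np.zeros(num_test, dtype=self.ytr.dtype)
        for i in range(num_test):
            # L1 distances from the i-th test row to every training row
            distances = np.sum(np.abs(self.Xtr - X[i, :]), axis=1)
            # labels of the k closest training examples
            nearest_labels = self.ytr[np.argsort(distances)[:k]]
            # majority vote among the k neighbors
            Ypred[i] = Counter(nearest_labels).most_common(1)[0][0]
        return Ypred
```

With k = 1 this reduces to the plain Nearest Neighbor classifier above.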
```python
# assume we have Xtr_rows, Ytr, Xte_rows, Yte as before
# recall Xtr_rows is 50,000 x 3072 matrix
Xval_rows = Xtr_rows[:1000, :]  # take first 1000 for validation
Yval = Ytr[:1000]
Xtr_rows = Xtr_rows[1000:, :]  # keep last 49,000 for train
Ytr = Ytr[1000:]

# find hyperparameters that work best on the validation set
validation_accuracies = []
for k in [1, 3, 5, 10, 20, 50, 100]:
    # use a particular value of k and evaluate on validation data
    nn = NearestNeighbor()
    nn.train(Xtr_rows, Ytr)
    # here we assume a modified NearestNeighbor class that can take a k as input
    Yval_predict = nn.predict(Xval_rows, k=k)
    acc = np.mean(Yval_predict == Yval)
    print('accuracy: %f' % acc)

    # keep track of what works on the validation set
    validation_accuracies.append((k, acc))
```
In addition, n-fold cross-validation can be used to choose k; here is an example:
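As a sketch of n-fold cross-validation for choosing k (the fold count, helper name, and small synthetic stand-in data are my own, not from the notes):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, X_test, k):
    """Majority-vote kNN with L1 distance (illustrative helper)."""
    preds = np.zeros(X_test.shape[0], dtype=y_train.dtype)
    for i in range(X_test.shape[0]):
        distances = np.sum(np.abs(X_train - X_test[i, :]), axis=1)
        nearest = y_train[np.argsort(distances)[:k]]
        preds[i] = Counter(nearest).most_common(1)[0][0]
    return preds

num_folds = 5
k_choices = [1, 3, 5]

# small synthetic stand-in for Xtr_rows / Ytr
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = (X[:, 0] > 0).astype(np.int64)  # label depends on the first feature

X_folds = np.array_split(X, num_folds)
y_folds = np.array_split(y, num_folds)

k_to_accuracies = {}
for k in k_choices:
    accs = []
    for i in range(num_folds):
        # fold i serves as validation, the remaining folds as training data
        X_train = np.concatenate(X_folds[:i] + X_folds[i + 1:])
        y_train = np.concatenate(y_folds[:i] + y_folds[i + 1:])
        preds = knn_predict(X_train, y_train, X_folds[i], k)
        accs.append(np.mean(preds == y_folds[i]))
    k_to_accuracies[k] = float(np.mean(accs))
    print('k = %d, mean accuracy = %f' % (k, k_to_accuracies[k]))
```

Each k is scored by its average accuracy across the folds, and the best-scoring k is then used on the test set.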
The notes then discuss the pros and cons of the Nearest Neighbor classifier: it needs essentially no training time (it only memorizes the data), but its prediction time complexity is high, since every test image must be compared against the entire training set. This is the opposite of neural networks, which are expensive to train but cheap at prediction time.
Summary: Applying kNN in practice
A summary of the workflow for applying kNN in practice: preprocess the data by normalizing features to zero mean and unit variance; if the data is very high-dimensional, consider a dimensionality-reduction technique such as PCA; split the training data into train/validation splits (or use cross-validation); train and evaluate the kNN classifier over many choices of k and distance types; if the classifier is too slow, consider an Approximate Nearest Neighbor library such as FLANN; and take note of the hyperparameters that gave the best results.
Linear Classification