CS231n Assignment 1: kNN


Assignment 1 overview page: http://cs231n.github.io/assignments2017/assignment1/
[TOC]

Before starting, I recommend getting familiar with basic Python syntax and the basics of Jupyter notebooks, and ideally with the NumPy API as well (although you can also pick these up while working through the assignment).

The kNN part of the assignment requires filling in missing code in two files: knn.ipynb in the root directory, and cs231n/classifiers/k_nearest_neighbor.py.

The contents of knn.ipynb (what the notebook page shows once completed) are as follows:

Each `In [n]:` below corresponds to one code cell in the Jupyter notebook.

## k-Nearest Neighbor (kNN) exercise

Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.

The kNN classifier consists of two stages:

- During training, the classifier takes the training data and simply remembers it.
- During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples.
- The value of k is cross-validated.

In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.

In [1]:

```python
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt

from __future__ import print_function

# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
```

In [2]:

```python
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
```

Output:

```
Training data shape:  (50000, 32, 32, 3)
Training labels shape:  (50000,)
Test data shape:  (10000, 32, 32, 3)
Test labels shape:  (10000,)
```

In [3]:

```python
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
    idxs = np.flatnonzero(y_train == y)
    idxs = np.random.choice(idxs, samples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt_idx = i * num_classes + y + 1
        plt.subplot(samples_per_class, num_classes, plt_idx)
        plt.imshow(X_train[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls)
plt.show()
```

*(Plot: a 7 × 10 grid of sample training images, one column per class.)*

In [4]:

```python
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]

num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
```

In [5]:

```python
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
```

Output:

```
(5000, 3072) (500, 3072)
```

In [6]:

```python
from cs231n.classifiers import KNearestNeighbor

# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
```
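For reference, restating what the docstrings in k_nearest_neighbor.py (listed at the end of this post) specify: every distance implementation below computes the same quantity, the Euclidean (L2) distance between the i-th test row and the j-th training row,

$$\text{dists}[i,j] = \bigl\lVert X_{\text{test}}[i] - X_{\text{train}}[j] \bigr\rVert_2 = \sqrt{\sum_{d=1}^{D}\bigl(X_{\text{test}}[i,d]-X_{\text{train}}[j,d]\bigr)^2}, \qquad D = 32 \times 32 \times 3 = 3072.$$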
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:

1. First we must compute the distances between all test examples and all train examples.
2. Given these distances, for each test example we find the k nearest examples and have them vote for the label.

Let's begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.

First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.

In [7]:

```python
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.

# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
```

Output:

```
(500, 5000)
```

In [8]:

```python
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
```

*(Plot: the 500 × 5000 distance matrix rendered as a grayscale image.)*

**Inline Question #1:** Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)

- What in the data is the cause behind the distinctly bright rows?
- What causes the columns?

**Your Answer:**

- Bright rows: the test image corresponding to that row has a large L2 distance to all of the training images.
- Bright columns: the training image corresponding to that column has a large L2 distance to all of the test images.

In [9]:

```python
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)

# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```

Output:

```
Got 137 / 500 correct => accuracy: 0.274000
```

You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5:

In [10]:

```python
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```

Output:

```
Got 139 / 500 correct => accuracy: 0.278000
```

You should expect to see slightly better performance than with k = 1.

In [11]:

```python
# Now let's speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)

# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
    print('Good! The distance matrices are the same')
else:
    print('Uh-oh! The distance matrices are different')
```

Output:

```
Difference was: 0.000000
Good! The distance matrices are the same
```
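Before moving on to the fully vectorized version, it is worth writing out the identity it relies on; this is exactly the "matrix multiplication and two broadcast sums" hint in the source file below:

$$\lVert x - y \rVert_2^2 = \lVert x \rVert_2^2 - 2\,x \cdot y + \lVert y \rVert_2^2.$$

Applied to whole matrices, the squared distance matrix is the sum of a column vector of test squared norms, a row vector of training squared norms, and $-2\,X X_{\text{train}}^{\top}$; a final elementwise square root yields dists.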
In [12]:

```python
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)

# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('Difference was: %f' % (difference, ))
if difference < 0.001:
    print('Good! The distance matrices are the same')
else:
    print('Uh-oh! The distance matrices are different')
```

Output:

```
Difference was: 0.000000
Good! The distance matrices are the same
```

In [13]:

```python
# Let's compare how fast the implementations are
def time_function(f, *args):
    """
    Call a function f with args and return the time (in seconds) that it took to execute.
    """
    import time
    tic = time.time()
    f(*args)
    toc = time.time()
    return toc - tic

two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)

one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)

no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)

# you should see significantly faster performance with the fully vectorized implementation
```

Output:

```
Two loop version took 28.623378 seconds
One loop version took 53.359165 seconds
No loop version took 0.167158 seconds
```

(Interestingly, on this run the one-loop version was actually slower than the double loop, likely because the broadcasted subtraction materializes a large (num_train, D) intermediate array on every iteration; the fully vectorized version is still faster by more than two orders of magnitude.)

## Cross-validation

We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
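The hint in the next cell points to numpy's array_split. A quick toy demonstration of its behavior (illustrative values only, not assignment data): unlike np.split, it tolerates lengths that don't divide evenly.

```python
import numpy as np

# Splitting 10 elements into 3 folds yields sizes 4, 3, 3.
folds = np.array_split(np.arange(10), 3)
print([f.tolist() for f in folds])  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

Here the 5000 training examples split into num_folds = 50 folds of exactly 100 examples each, so np.split would work equally well.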
In [14]:

```python
num_folds = 50  # number of cross-validation folds
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]  # candidate values of k

X_train_folds = []  # the inconsistent capitalization here really bothers me, emmm
y_train_folds = []
################################################################################
# TODO:                                                                        #
# Split up the training data into folds. After splitting, X_train_folds and   #
# y_train_folds should each be lists of length num_folds, where               #
# y_train_folds[i] is the label vector for the points in X_train_folds[i].    #
# Hint: Look up the numpy array_split function.                               #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
################################################################################
#                                 END OF YOUR CODE                             #
################################################################################

# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}

################################################################################
# TODO:                                                                        #
# Perform k-fold cross validation to find the best value of k. For each       #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times,  #
# where in each case you use all but one of the folds as training data and    #
# the last fold as a validation set. Store the accuracies for all fold and    #
# all values of k in the k_to_accuracies dictionary.                          #
################################################################################
for k in k_choices:
    classifier = KNearestNeighbor()
    k_to_accuracies[k] = []
    for i in range(num_folds):
        # Use every fold except the i-th as training data.
        tmp = list(range(i)) + list(range(i + 1, num_folds))
        train_data = np.concatenate([X_train_folds[j] for j in tmp])
        train_label = np.concatenate([y_train_folds[j] for j in tmp])
        classifier.train(train_data, train_label)
        # The held-out i-th fold serves as the validation set.
        test = X_train_folds[i]
        dists = classifier.compute_distances_no_loops(test)
        y_test_pred = classifier.predict_labels(dists, k)
        num_correct = np.sum(y_test_pred == y_train_folds[i])
        num_test_train = np.shape(X_train_folds[i])[0]
        accuracy = float(num_correct) / num_test_train
        k_to_accuracies[k].append(accuracy)
################################################################################
#                                 END OF YOUR CODE                             #
################################################################################

# Print out the computed accuracies
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))
```

Output (truncated; with num_folds = 50 the full listing has 50 lines for each of the 10 values of k):

```
k = 1, accuracy = 0.230000
k = 1, accuracy = 0.310000
k = 1, accuracy = 0.270000
...
k = 100, accuracy = 0.320000
k = 100, accuracy = 0.260000
k = 100, accuracy = 0.280000
```

In [15]:

```python
# plot the raw observations
for k in k_choices:
    accuracies = k_to_accuracies[k]
    plt.scatter([k] * len(accuracies), accuracies)

# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k, v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k, v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
```

*(Plot: "Cross-validation on k" — per-fold accuracies scattered over each k, with a mean trend line and standard-deviation error bars.)*

In [17]:

```python
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10

classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)

# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```

Output:

```
Got 141 / 500 correct => accuracy: 0.282000
```
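The notebook hard-codes best_k = 10, read off the plot above. If you would rather pick it programmatically, here is a minimal sketch using the k_to_accuracies dictionary from In [14], selecting the k with the highest mean accuracy (note that with folds this noisy, several values of k are statistically indistinguishable):

```python
import numpy as np

# Assumes k_to_accuracies from the cross-validation cell is in scope.
best_k = max(k_to_accuracies, key=lambda k: np.mean(k_to_accuracies[k]))
print('Best k by mean cross-validation accuracy:', best_k)
```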

The contents of k_nearest_neighbor.py (the source file, with the TODO blocks filled in) are as follows:

```python
import numpy as np
from past.builtins import xrange


class KNearestNeighbor(object):
  """ a kNN classifier with L2 distance """

  def __init__(self):
    pass

  def train(self, X, y):
    """
    Train the classifier. For k-nearest neighbors this is just
    memorizing the training data.

    Inputs:
    - X: A numpy array of shape (num_train, D) containing the training data
      consisting of num_train samples each of dimension D.
    - y: A numpy array of shape (N,) containing the training labels, where
         y[i] is the label for X[i].
    """
    self.X_train = X
    self.y_train = y

  def predict(self, X, k=1, num_loops=0):
    """
    Predict labels for test data using this classifier.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data consisting
         of num_test samples each of dimension D.
    - k: The number of nearest neighbors that vote for the predicted labels.
    - num_loops: Determines which implementation to use to compute distances
      between training points and testing points.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].
    """
    if num_loops == 0:
      dists = self.compute_distances_no_loops(X)
    elif num_loops == 1:
      dists = self.compute_distances_one_loop(X)
    elif num_loops == 2:
      dists = self.compute_distances_two_loops(X)
    else:
      raise ValueError('Invalid value %d for num_loops' % num_loops)

    return self.predict_labels(dists, k=k)

  def compute_distances_two_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a nested loop over both the training data and the
    test data.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data.

    Returns:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      is the Euclidean distance between the ith test point and the jth training
      point.
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in xrange(num_test):
      for j in xrange(num_train):
        #####################################################################
        # TODO:                                                             #
        # Compute the l2 distance between the ith test point and the jth    #
        # training point, and store the result in dists[i, j]. You should   #
        # not use a loop over dimension.                                    #
        #####################################################################
        dists[i][j] = np.sqrt(np.sum(np.square(self.X_train[j, :] - X[i, :])))
        #####################################################################
        #                       END OF YOUR CODE                            #
        #####################################################################
    return dists

  def compute_distances_one_loop(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a single loop over the test data.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in xrange(num_test):
      #######################################################################
      # TODO:                                                               #
      # Compute the l2 distance between the ith test point and all training #
      # points, and store the result in dists[i, :].                        #
      #######################################################################
      # Broadcasting subtracts X[i, :] from every training row at once.
      dists[i, :] = np.sqrt(np.sum(np.square(self.X_train - X[i, :]), axis=1))
      #######################################################################
      #                         END OF YOUR CODE                            #
      #######################################################################
    return dists

  def compute_distances_no_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using no explicit loops.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    #########################################################################
    # TODO:                                                                 #
    # Compute the l2 distance between all test points and all training      #
    # points without using any explicit loops, and store the result in      #
    # dists.                                                                #
    #                                                                       #
    # You should implement this function using only basic array operations; #
    # in particular you should not use functions from scipy.                #
    #                                                                       #
    # HINT: Try to formulate the l2 distance using matrix multiplication    #
    #       and two broadcast sums.                                         #
    #########################################################################
    # ||x - y||^2 = ||x||^2 - 2*x.y + ||y||^2, assembled without loops:
    dists = np.multiply(np.dot(X, self.X_train.T), -2)
    dists = np.add(dists, np.sum(np.square(X), axis=1, keepdims=True))
    dists = np.add(dists, np.sum(np.square(self.X_train), axis=1))
    dists = np.sqrt(dists)
    #########################################################################
    #                         END OF YOUR CODE                              #
    #########################################################################
    return dists

  def predict_labels(self, dists, k=1):
    """
    Given a matrix of distances between test points and training points,
    predict a label for each test point.

    Inputs:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      gives the distance between the ith test point and the jth training point.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].
    """
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)
    for i in xrange(num_test):
      # A list of length k storing the labels of the k nearest neighbors to
      # the ith test point.
      closest_y = []
      #########################################################################
      # TODO:                                                                 #
      # Use the distance matrix to find the k nearest neighbors of the ith    #
      # testing point, and use self.y_train to find the labels of these       #
      # neighbors. Store these labels in closest_y.                           #
      # Hint: Look up the function numpy.argsort.                             #
      #########################################################################
      closest_y = self.y_train[np.argsort(dists[i, :])[:k]]
      #########################################################################
      # TODO:                                                                 #
      # Now that you have found the labels of the k nearest neighbors, you    #
      # need to find the most common label in the list closest_y of labels.   #
      # Store this label in y_pred[i]. Break ties by choosing the smaller     #
      # label.                                                                #
      #########################################################################
      # np.bincount counts label occurrences; np.argmax returns the smallest
      # index among ties, which implements the "choose the smaller label" rule.
      y_pred[i] = np.argmax(np.bincount(closest_y))
      #########################################################################
      #                           END OF YOUR CODE                            #
      #########################################################################
    return y_pred
```
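To sanity-check the finished file outside the notebook, here is a minimal smoke test (my own sketch, not part of the assignment) verifying that the three distance implementations agree and that k = 1 prediction on the training set itself is perfect. Integer-valued features are used so that the self-distances in the no-loops formula come out exactly zero:

```python
import numpy as np
from cs231n.classifiers import KNearestNeighbor

np.random.seed(0)
X = np.random.randint(0, 10, (20, 8)).astype(float)  # 20 toy points, 8 features
y = np.random.randint(0, 3, 20)                      # 3 toy classes

clf = KNearestNeighbor()
clf.train(X, y)

# All three implementations should produce the same distance matrix.
d2 = clf.compute_distances_two_loops(X)
d1 = clf.compute_distances_one_loop(X)
d0 = clf.compute_distances_no_loops(X)
assert np.allclose(d2, d1) and np.allclose(d2, d0)

# With k = 1, each training point is its own nearest neighbor (distance 0),
# so predicting on the training set should reproduce the labels exactly.
assert np.all(clf.predict(X, k=1) == y)
print('All checks passed')
```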