[Python Code] CS231n Example: Softmax Linear Classifier vs. Nonlinear Classifier (with Python Plots of the Results)
# CS231n example: linear vs. nonlinear classifiers (Softmax)
# Note the backpropagation computations
import numpy as np
import matplotlib.pyplot as plt

# generate a spiral dataset of K classes that is not linearly separable
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K, D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
    ix = range(N*j, N*(j+1))
    r = np.linspace(0.0, 1, N) # radius
    t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2 # theta
    X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
    y[ix] = j

# let's visualize the data
plt.xlim([-1, 1])
plt.ylim([-1, 1])
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.show()

# Linear classifier: initialize parameters randomly
W = 0.01 * np.random.randn(D, K)
b = np.zeros((1, K))

# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength

# gradient descent loop
num_examples = X.shape[0]
for i in range(200):
    # evaluate class scores, [N x K]
    scores = np.dot(X, W) + b

    # compute the class probabilities
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]

    # compute the loss: average cross-entropy loss and regularization
    correct_logprobs = -np.log(probs[range(num_examples), y])
    data_loss = np.sum(correct_logprobs) / num_examples
    reg_loss = 0.5*reg*np.sum(W*W)
    loss = data_loss + reg_loss
    if i % 10 == 0:
        print("iteration %d: loss %f" % (i, loss))

    # compute the gradient on scores (note: dscores aliases probs)
    dscores = probs
    dscores[range(num_examples), y] -= 1
    dscores /= num_examples

    # backpropagate the gradient to the parameters (W, b)
    dW = np.dot(X.T, dscores)
    db = np.sum(dscores, axis=0, keepdims=True)
    dW += reg*W # regularization gradient

    # perform a parameter update
    W += -step_size * dW
    b += -step_size * db

# evaluate training set accuracy
scores = np.dot(X, W) + b
predicted_class = np.argmax(scores, axis=1)
print('training accuracy: %.2f' % (np.mean(predicted_class == y)))

# plot the resulting classifier: decision regions over a mesh grid
h = 0.02 # mesh step size
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
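The step worth double-checking above is the softmax gradient: the gradient on the scores is simply probs with 1 subtracted at each example's correct class. The following is a minimal numerical gradient check, a sketch that is not part of the original CS231n code; the helper softmax_loss_and_grad is introduced here for illustration. It recomputes the analytic gradient at the current W and compares it against centered finite differences at a few entries.

def softmax_loss_and_grad(W, b, X, y, reg):
    # helper introduced for this check (not from the CS231n code):
    # regularized softmax cross-entropy loss and its gradient w.r.t. W
    n = X.shape[0]
    scores = np.dot(X, W) + b
    exp_scores = np.exp(scores - scores.max(axis=1, keepdims=True))  # stabilized, same probs
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
    loss = -np.mean(np.log(probs[range(n), y])) + 0.5 * reg * np.sum(W * W)
    dscores = probs.copy()
    dscores[range(n), y] -= 1
    dW = np.dot(X.T, dscores) / n + reg * W
    return loss, dW

# compare the analytic gradient against centered differences at a few entries
_, dW_analytic = softmax_loss_and_grad(W, b, X, y, reg)
eps = 1e-5
for idx in [(0, 0), (1, 2)]:
    W_plus, W_minus = W.copy(), W.copy()
    W_plus[idx] += eps
    W_minus[idx] -= eps
    num_grad = (softmax_loss_and_grad(W_plus, b, X, y, reg)[0] -
                softmax_loss_and_grad(W_minus, b, X, y, reg)[0]) / (2 * eps)
    print('dW[%s]: numerical %.6f, analytic %.6f' % (idx, num_grad, dW_analytic[idx]))

The two columns should agree to several decimal places; a large relative error usually means a sign or averaging mistake in the analytic gradient.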
# Nonlinear classifier with one hidden layer, using ReLU
# initialize parameters randomly
h = 100 # size of hidden layer
W = 0.01 * np.random.randn(D, h)
b = np.zeros((1, h))
W2 = 0.01 * np.random.randn(h, K)
b2 = np.zeros((1, K))

# some hyperparameters
step_size = 1e-0
reg = 1e-3 # regularization strength

# gradient descent loop
num_examples = X.shape[0]
for i in range(10000):
    # evaluate class scores, [N x K]
    hidden_layer = np.maximum(0, np.dot(X, W) + b) # note, ReLU activation
    scores = np.dot(hidden_layer, W2) + b2

    # compute the class probabilities
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # [N x K]

    # compute the loss: average cross-entropy loss and regularization
    correct_logprobs = -np.log(probs[range(num_examples), y])
    data_loss = np.sum(correct_logprobs) / num_examples
    reg_loss = 0.5*reg*np.sum(W*W) + 0.5*reg*np.sum(W2*W2)
    loss = data_loss + reg_loss
    if i % 1000 == 0:
        print("iteration %d: loss %f" % (i, loss))

    # compute the gradient on scores (note: dscores aliases probs)
    dscores = probs
    dscores[range(num_examples), y] -= 1
    dscores /= num_examples

    # backpropagate the gradient to the parameters
    # first backprop into parameters W2 and b2
    dW2 = np.dot(hidden_layer.T, dscores)
    db2 = np.sum(dscores, axis=0, keepdims=True)
    # next backprop into hidden layer
    dhidden = np.dot(dscores, W2.T)
    # backprop the ReLU non-linearity
    dhidden[hidden_layer <= 0] = 0
    # finally into W, b
    dW = np.dot(X.T, dhidden)
    db = np.sum(dhidden, axis=0, keepdims=True)

    # add regularization gradient contribution
    dW2 += reg * W2
    dW += reg * W

    # perform a parameter update
    W += -step_size * dW
    b += -step_size * db
    W2 += -step_size * dW2
    b2 += -step_size * db2

# evaluate training set accuracy
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
predicted_class = np.argmax(scores, axis=1)
print('training accuracy: %.2f' % (np.mean(predicted_class == y)))

# plot the resulting classifier (h is reused here as the mesh step size;
# training is finished, so the hidden-layer size is no longer needed)
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = np.dot(np.maximum(0, np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b), W2) + b2
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
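For reference, the backward pass in the training loop implements the following chain-rule steps, written here in standard matrix notation (this summary is not from the original post). With hidden activations $H = \max(0, XW + b)$ elementwise, scores $S = H W_2 + b_2$, row-wise softmax probabilities $P$, one-hot labels $Y$, $N$ = num_examples, and $\lambda$ = reg:

$$\frac{\partial L}{\partial S} = \frac{1}{N}\,(P - Y)$$

$$\frac{\partial L}{\partial W_2} = H^\top \frac{\partial L}{\partial S} + \lambda W_2, \qquad \frac{\partial L}{\partial b_2} = \mathbf{1}^\top \frac{\partial L}{\partial S}$$

$$\frac{\partial L}{\partial H} = \frac{\partial L}{\partial S}\, W_2^\top \odot \mathbb{1}[H > 0], \qquad \frac{\partial L}{\partial W} = X^\top \frac{\partial L}{\partial H} + \lambda W, \qquad \frac{\partial L}{\partial b} = \mathbf{1}^\top \frac{\partial L}{\partial H}$$

In the code, dscores, dW2, db2, dhidden, dW, and db correspond line-for-line to these quantities, with the ReLU mask implemented by zeroing dhidden where hidden_layer <= 0. Because the spiral data is not linearly separable, the linear classifier plateaus (roughly 49% training accuracy in the CS231n notes) while the two-layer network reaches about 98%, which is exactly the contrast the two decision-boundary plots show.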