Neural Networks and Deep Learning study notes: numpy basics

import numpy
# def sigmoid(x):
#     return 1/(1+numpy.exp(-x))
# print(sigmoid(3))
# print(sigmoid(numpy.array([1,2,3])))

def sigmoid_derivative(x):
    # derivative of the sigmoid: s' = s*(1-s)
    s = 1/(1+numpy.exp(-x))
    ds = s*(1-s)
    return ds

print(sigmoid_derivative(numpy.array([1,2,3])))

# generate a random matrix
# a=numpy.random.randn(2,4)
# print(a)

# matrix multiplication
# a=numpy.random.randn(4,3)
# b=numpy.random.randn(3,4)
# print(numpy.dot(a,b))

# column sums and row sums
# A=numpy.array([[56,0,4.4,68],
#                [1.2,104,52,8],
#                [1.8,135,99,0.9]])
# cal=A.sum(axis=0)      # axis=0 sums down each column, axis=1 sums along each row
# percentage=A/cal*100
# print(percentage)

# broadcasting: a row or column vector is stretched to match the matrix shape
# B=numpy.array([[1,2,3],
#                [4,5,6],
#                [7,8,9],
#                [10,11,12]])
# C=numpy.array([[1],
#                [2],
#                [3],
#                [4]])
# print(B*100)
# print(B/C)
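As a quick sanity check on the derivative formula s*(1-s) (my own addition, not from the original notes), the analytic derivative can be compared against a central finite difference:

import numpy

def sigmoid(x):
    return 1 / (1 + numpy.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1 - s)

# central difference (f(x+h) - f(x-h)) / (2h) approximates f'(x)
x = numpy.array([1.0, 2.0, 3.0])
h = 1e-5
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(numpy.allclose(sigmoid_derivative(x), numeric))    # True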

Vector flattening

Image data is usually stored as a three-dimensional array (height × width × color depth), so flattening it into a vector is an important step before computation. The following code converts a 3-D array into a column vector:

import numpy

def image2vector(image):
    # image.shape holds the array's dimensions; a 3-D array has shape[0], shape[1], shape[2]
    return image.reshape((image.shape[0]*image.shape[1]*image.shape[2], 1))

image = numpy.array([[[0.678,0.293],
                      [0.907,0.528],
                      [0.421,0.450]],
                     [[0.928,0.966],
                      [0.853,0.523],
                      [0.199,0.274]],
                     [[0.606,0.005],
                      [0.108,0.499],
                      [0.341,0.946]]])
print(image2vector(image))
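As a side note (a variant sketch, not in the original notes), numpy can infer the flattened length on its own when one dimension is given as -1, which avoids multiplying the shape entries by hand:

import numpy

def image2vector_alt(image):
    # -1 lets numpy infer the row count from the total number of elements
    return image.reshape(-1, 1)

image = numpy.zeros((3, 3, 2))
print(image2vector_alt(image).shape)    # (18, 1)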

Vector normalization

import numpy

# normalize each row of x to unit (Euclidean) length
def normalizeRows(x):
    x_norm = numpy.linalg.norm(x, axis=1, keepdims=True)    # axis=1: compute along each row
    x = x / x_norm
    return x

x = numpy.array([[0,3,4],
                 [1,6,4]])
print(normalizeRows(x))
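To verify the result, every row of the normalized matrix should now have norm 1 (a check added here, not in the original):

import numpy

x = numpy.array([[0, 3, 4],
                 [1, 6, 4]])
normalized = x / numpy.linalg.norm(x, axis=1, keepdims=True)
# each row norm should come out as 1.0
print(numpy.linalg.norm(normalized, axis=1))    # [1. 1.]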

softmax

Figure from 《一天搞懂深度学习》 (Deep Learning in One Day), illustrating softmax; image not reproduced here.

Code implementation:

import numpy

def softmax(x):
    x_exp = numpy.exp(x)
    # keepdims=True keeps the row sums as an (n, 1) column, so broadcasting
    # divides each row by its own sum regardless of the number of rows
    x_sum = x_exp.sum(axis=1, keepdims=True)
    s = x_exp / x_sum
    return s

x = numpy.array([[9,2,5,0,0],
                 [7,5,0,0,0]])
print(softmax(x))
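One practical caveat (a common refinement, not part of the original notes): numpy.exp overflows for large inputs, so softmax is usually computed after subtracting each row's maximum; the result is unchanged because softmax is invariant to adding a constant to a row:

import numpy

def softmax_stable(x):
    # subtracting the row max leaves softmax unchanged but keeps exp from overflowing
    shifted = x - x.max(axis=1, keepdims=True)
    x_exp = numpy.exp(shifted)
    return x_exp / x_exp.sum(axis=1, keepdims=True)

x = numpy.array([[9,2,5,0,0],
                 [7,5,0,0,0]])
print(softmax_stable(x))    # same values as softmax(x) above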

L1 and L2 loss

import numpy

# L1 loss: sum of absolute differences
def L1(y_hat, y):
    loss = numpy.sum(numpy.abs(y - y_hat))
    return loss

# L2 loss: sum of squared differences, via a dot product of the error with itself
def L2(y_hat, y):
    loss = numpy.dot(y - y_hat, y - y_hat)
    return loss

y_hat = numpy.array([0.9,0.2,0.1,0.4,0.9])
y = numpy.array([1,0,0,1,1])
print(L1(y_hat, y))    # 1.1
print(L2(y_hat, y))    # 0.43

where L1 and L2 are defined as:

$$L_1(\hat{y}, y) = \sum_{i=0}^{m} \left| y^{(i)} - \hat{y}^{(i)} \right|$$

$$L_2(\hat{y}, y) = \sum_{i=0}^{m} \left( y^{(i)} - \hat{y}^{(i)} \right)^2$$
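Plugging the example vectors into these formulas by hand (a worked check added for clarity) reproduces the printed values:

$$L_1 = 0.1 + 0.2 + 0.1 + 0.6 + 0.1 = 1.1$$

$$L_2 = 0.1^2 + 0.2^2 + 0.1^2 + 0.6^2 + 0.1^2 = 0.43$$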
