[Data Platform] A First Look at the PyTorch Library
PyTorch is a deep learning tensor library optimized for both GPUs and CPUs.
1. Installation, see the official site: http://pytorch.org/
conda install pytorch torchvision -c pytorch
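After installing, it is worth quickly verifying that the package imports and whether a CUDA device is visible. A minimal check (not part of the original post):

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a usable CUDA GPU is detected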
2. Getting to know the library, see:
https://github.com/yunjey/pytorch-tutorial
https://github.com/jcjohnson/pytorch-examples
http://pytorch-cn.readthedocs.io/zh/latest/
3. Demo:
# Code in file tensor/two_layer_net_tensor.py
import torch

dtype = torch.FloatTensor
# dtype = torch.cuda.FloatTensor  # Uncomment this to run on GPU

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = torch.randn(N, D_in).type(dtype)
y = torch.randn(N, D_out).type(dtype)

# Randomly initialize weights
w1 = torch.randn(D_in, H).type(dtype)
w2 = torch.randn(H, D_out).type(dtype)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)

    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

Running the script prints the iteration index and the loss value for each of the 500 steps.
The same network implemented in NumPy produces a comparable result. The code is as follows:
# Code in file tensor/two_layer_net_numpy.py
import numpy as np

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)

    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
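Both versions above derive the backward pass by hand. PyTorch's autograd can compute the same gradients automatically; the following is a minimal sketch of that idea (not part of the original post, and it assumes PyTorch 0.4+ where tensors carry requires_grad directly):

import torch

N, D_in, H, D_out = 64, 1000, 100, 10

x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Weights need gradients so autograd can track operations on them
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # Forward pass, written exactly like the manual version
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())

    # Backward pass: autograd fills w1.grad and w2.grad
    loss.backward()

    # Update weights in-place, then clear the gradients for the next step
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()
        w2.grad.zero_()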
Depending on the actual application scenario, this can be studied in more depth later; the key focus will be the GPU.
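As a starting point for the GPU side, the tensor demo above only needs its data and weights placed on a CUDA device. A minimal sketch using the current device-based API (not from the original post, which used the older torch.cuda.FloatTensor switch):

import torch

# Pick the GPU when one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

N, D_in, H, D_out = 64, 1000, 100, 10

# Creating tensors directly on the chosen device replaces the
# dtype = torch.cuda.FloatTensor line in the demo above
x = torch.randn(N, D_in, device=device)
y = torch.randn(N, D_out, device=device)
w1 = torch.randn(D_in, H, device=device)
w2 = torch.randn(H, D_out, device=device)

# The training loop itself is unchanged: every operation now runs on that device
y_pred = x.mm(w1).clamp(min=0).mm(w2)
print(y_pred.device)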