Caffe MNIST: the difference between lenet_train_test.prototxt and lenet.prototxt (the deploy file)
Source: Internet | Editor: 程序博客网 | Date: 2024/06/11 11:23
References:
http://blog.csdn.net/cham_3/article/details/52682479
http://blog.csdn.net/l18930738887/article/details/54898016
I ran the MNIST training and then used the trained model for prediction.
Both files live in examples/mnist: lenet_train_test.prototxt defines the network used for training and testing, while the deploy file (lenet.prototxt) is what the classification tool loads.
The general idea: the training/testing configuration has to be more detailed, while the deploy file only tells the classification program (mnist_classification.bin) what the network structure looks like; no backward pass and no loss computation are needed.
Here are the contents of the two files:
cat lenet_train_test.prototxt
name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
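A quick aside (my own note, not from the referenced posts): the scale: 0.00390625 in both transform_param blocks is exactly 1/256, i.e. it rescales the raw byte pixels in [0, 255] down to [0, 1). A quick check in Python:

```python
# scale used by the MNIST data layers above: 1/256
scale = 0.00390625
assert scale == 1 / 256

# raw grayscale pixels are bytes in [0, 255]
pixels = [0, 128, 255]
scaled = [p * scale for p in pixels]
print(scaled)  # [0.0, 0.5, 0.99609375]
```

This matters for the deploy file: lenet.prototxt below has no transform_param, so whoever feeds data into the deployed network has to apply the same 1/256 scaling themselves, otherwise the inputs will not match what the network was trained on.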
cat lenet.prototxt
name: "LeNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
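A sanity check of my own before going through the differences: both files describe the same learnable layers (only Convolution and InnerProduct layers carry weights), so the parameter counts are identical; only the input and output ends differ. A rough tally in Python:

```python
# weights + biases per learnable layer of LeNet as listed above
conv1 = 20 * 1 * 5 * 5 + 20        # 20 filters over 1 input channel, 5x5 kernels
conv2 = 50 * 20 * 5 * 5 + 50       # 50 filters over conv1's 20 channels
ip1   = 500 * (50 * 4 * 4) + 500   # pool2 output: 50 channels of 4x4 (28->24->12->8->4)
ip2   = 10 * 500 + 10              # 10 digit classes
total = conv1 + conv2 + ip1 + ip2
print(total)  # 431080
```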
The differences:
1. First, delete the parts used only for TEST, e.g. the test data-input layer at the top:
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
and the test accuracy layer at the end:
layer {
name: "accuracy"
type: "Accuracy"
bottom: "ip2"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
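If you need to make this edit often, the pruning can be scripted. The sketch below is my own minimal illustration, not a real prototxt parser (it assumes braces never appear inside string values): it splits the file into top-level layer { ... } blocks by brace matching and drops any block that contains phase: TEST:

```python
def strip_test_layers(prototxt):
    """Remove top-level 'layer { ... }' blocks containing 'phase: TEST'.
    Minimal sketch; assumes no '{' or '}' inside quoted strings."""
    out, i = [], 0
    while i < len(prototxt):
        j = prototxt.find('layer', i)
        if j == -1:
            out.append(prototxt[i:])       # trailing text after the last block
            break
        out.append(prototxt[i:j])          # text before this layer block
        depth, k = 0, prototxt.find('{', j)
        while True:                        # scan to the matching closing brace
            if prototxt[k] == '{':
                depth += 1
            elif prototxt[k] == '}':
                depth -= 1
                if depth == 0:
                    break
            k += 1
        block = prototxt[j:k + 1]
        if 'phase: TEST' not in block:     # keep everything except TEST-only layers
            out.append(block)
        i = k + 1
    return ''.join(out)

net = '''name: "LeNet"
layer { name: "mnist" type: "Data" include { phase: TRAIN } }
layer { name: "accuracy" type: "Accuracy" include { phase: TEST } }
layer { name: "loss" type: "SoftmaxWithLoss" }
'''
cleaned = strip_test_layers(net)
print(cleaned)  # the accuracy layer is gone, everything else remains
```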
2. Next, modify the TRAIN data-input part: the deploy file only needs to declare the input dimensions.
layer {
name: "mnist"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
scale: 0.00390625
}
data_param {
source: "examples/mnist/mnist_train_lmdb"
batch_size: 64
backend: LMDB
}
}
is changed to:
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }
}
In shape: { dim: 64 dim: 1 dim: 28 dim: 28 }, the first dim is the batch size, the second dim is the number of channels (1 here; for an RGB image it would be 3), and the third and fourth dims are the height and width of the image (both 28 for MNIST).
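You can verify that a 28x28 input works through these layer parameters using the standard Caffe output-size formulas (convolution rounds down, pooling rounds up); a quick sketch:

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Caffe convolution output size: floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride):
    # Caffe pooling output size: ceil((size - kernel) / stride) + 1
    return -(-(size - kernel) // stride) + 1

s = 28                      # input: 1 x 28 x 28
s = conv_out(s, 5)          # conv1 (20 channels) -> 24 x 24
s = pool_out(s, 2, 2)       # pool1               -> 12 x 12
s = conv_out(s, 5)          # conv2 (50 channels) -> 8 x 8
s = pool_out(s, 2, 2)       # pool2               -> 4 x 4
print(s)                    # 4
```

So ip1 sees a flattened 50 * 4 * 4 = 800-dimensional vector per image.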
3. Finally, replace the original last layer, the loss layer, with a prob layer.
The original final layer:
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
A: Replace SoftmaxWithLoss with Softmax.
B: Delete the bottom: "label" line: at test time the label is what you want the network to predict, not an input you supply.
C: Rename the layer to match, changing name: "loss" and top: "loss" to name: "prob" and top: "prob".
After these changes the final layer looks like this:
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
The name: "prob" here is the layer name you read from at prediction time, so it must match exactly.
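To make the Softmax vs SoftmaxWithLoss change concrete: the deploy prob layer outputs class probabilities, while SoftmaxWithLoss goes one step further and folds in the cross-entropy loss against the label. A small illustration with made-up logits (plain Python, not pycaffe):

```python
import math

def softmax(logits):
    m = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [0.5, 2.0, 0.1, -1.0, 0.3, 0.2, 0.0, 1.2, 0.4, -0.5]  # one score per digit 0-9
prob = softmax(logits)                   # what the deploy "prob" layer outputs
assert abs(sum(prob) - 1.0) < 1e-9      # probabilities sum to 1
pred = max(range(10), key=lambda i: prob[i])
print(pred)                              # 1 (index of the largest logit)

# SoftmaxWithLoss additionally needs the true label:
label = 7
loss = -math.log(prob[label])            # cross-entropy on the softmax output
```

This is why the deploy file can drop bottom: "label": Softmax needs only ip2's scores, whereas the training-time loss needs both.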