Caffe Basics 14: Training on the CIFAR-10 Dataset


Preparing the Dataset

  • The CIFAR-10 dataset is described on its official page: https://www.cs.toronto.edu/~kriz/cifar.html
  • It is a small dataset with 10 classes. Each image is 32*32*3, and there are 60,000 images in total.
  • Download the binary version from that page and place it in the ~/caffe/data/cifar10 folder.
  • Run the script get_cifar10.sh, whose contents are shown below. Note that the wget
    command is commented out here because the archive has already been downloaded.
#!/usr/bin/env sh
# This scripts downloads the CIFAR10 (binary version) data and unzips it.

DIR="$( cd "$(dirname "$0")" ; pwd -P )"
cd "$DIR"

echo "Downloading..."

#wget --no-check-certificate http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz

echo "Unzipping..."

tar -xf cifar-10-binary.tar.gz && rm -f cifar-10-binary.tar.gz
mv cifar-10-batches-bin/* . && rm -rf cifar-10-batches-bin

# Creation is split out because leveldb sometimes causes segfault
# and needs to be re-created.

echo "Done."
  • After the script runs, several .bin files appear in the folder; the sketch below shows how they are laid out.
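The binary format is simple enough to parse by hand, which is handy for sanity checks. Here is a minimal Python/NumPy sketch (the file path is just an example): each record in a batch file is 1 label byte followed by 3,072 pixel bytes (3 channels x 32 x 32, stored R, then G, then B), and each batch file holds 10,000 records. This is the same layout that convert_cifar_data.bin consumes in the next step.

import numpy as np

# Each CIFAR-10 binary record: <1 byte label><3072 bytes of pixels>.
# The 3072 pixel bytes are 3 channels x 32 rows x 32 cols, R then G then B.
def read_cifar10_bin(path):
    raw = np.fromfile(path, dtype=np.uint8).reshape(-1, 1 + 3 * 32 * 32)
    labels = raw[:, 0]                          # class indices 0..9
    images = raw[:, 1:].reshape(-1, 3, 32, 32)  # NCHW layout, uint8
    return images, labels

images, labels = read_cifar10_bin('data/cifar10/data_batch_1.bin')
print(images.shape, labels[:10])                # (10000, 3, 32, 32), first ten labels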


Creating the LMDB Data Source

  • The command for building the data source is already provided: ~/caffe/examples/cifar10 contains a create_cifar10.sh file:
#!/usr/bin/env sh
# This script converts the cifar data into leveldb format.
set -e

EXAMPLE=/home/terrence/caffe/examples/cifar10
DATA=/home/terrence/caffe/data/cifar10
DBTYPE=lmdb

echo "Creating $DBTYPE..."

rm -rf $EXAMPLE/cifar10_train_$DBTYPE $EXAMPLE/cifar10_test_$DBTYPE

/home/terrence/caffe/build/examples/cifar10/convert_cifar_data.bin $DATA $EXAMPLE $DBTYPE

echo "Computing image mean..."

/home/terrence/caffe/build/tools/compute_image_mean -backend=$DBTYPE \
  $EXAMPLE/cifar10_train_$DBTYPE $EXAMPLE/mean.binaryproto

echo "Done."
  • As you can see, the mean-computation step is already taken care of here: the script writes the image mean to mean.binaryproto, which is used for mean subtraction later.
  • After running the script, two new lmdb directories appear in the folder. To see what the mean file actually contains, see the sketch below.
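To be precise, compute_image_mean averages the training images pixel-wise and stores the result as a serialized BlobProto; the actual subtraction happens inside the Data layers at training time via transform_param. Here is a small inspection sketch using pycaffe. It assumes Caffe's Python bindings are built and on your PYTHONPATH, and that you run it from the Caffe root.

import caffe
from caffe.proto import caffe_pb2

# Parse the serialized BlobProto written by compute_image_mean.
blob = caffe_pb2.BlobProto()
with open('examples/cifar10/mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())

mean = caffe.io.blobproto_to_array(blob)[0]   # numpy array, shape (3, 32, 32)
print(mean.shape, mean.mean(axis=(1, 2)))     # per-channel average pixel value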

Training the Model

  • The ~/caffe/examples/cifar10/ folder provides a quick-training script. Open the file train_quick.sh:
#!/usr/bin/env sh
set -e

TOOLS=./build/tools

$TOOLS/caffe train \
  --solver=examples/cifar10/cifar10_quick_solver.prototxt $@

# reduce learning rate by factor of 10 after 8 epochs
$TOOLS/caffe train \
  --solver=examples/cifar10/cifar10_quick_solver_lr1.prototxt \
  --snapshot=examples/cifar10/cifar10_quick_iter_4000.solverstate $@
  • The script trains in two stages: the first 4,000 iterations use cifar10_quick_solver.prototxt, then training resumes from the saved solver state with the learning rate lowered by a factor of 10 (cifar10_quick_solver_lr1.prototxt). Find the cifar10_quick_solver.prototxt file; its contents are as follows:
# reduce the learning rate after 8 epochs (4000 iters) by a factor of 10
# The train/test net protocol buffer definition
net: "examples/cifar10/cifar10_quick_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.001
momentum: 0.9
weight_decay: 0.004
# The learning rate policy
lr_policy: "fixed"
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 4000
# snapshot intermediate results
snapshot: 4000
snapshot_prefix: "examples/cifar10/cifar10_quick"
# solver mode: CPU or GPU
solver_mode: GPU
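The solver's numbers are easy to cross-check against the dataset sizes: with 50,000 training images and batch_size: 100, one epoch is 500 iterations, so max_iter: 4000 is exactly the 8 epochs the comment mentions, and test_iter: 100 at test batch size 100 covers all 10,000 test images. A trivial sanity check:

# Cross-checking the solver settings against the CIFAR-10 sizes.
train_images, test_images = 50000, 10000
batch_size = 100                              # batch_size in both Data layers

iters_per_epoch = train_images // batch_size  # 500
print(4000 / iters_per_epoch)                 # 8.0 epochs at max_iter 4000
print(100 * batch_size == test_images)        # True: test_iter covers the full test set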
  • Next, open the model definition file examples/cifar10/cifar10_quick_train_test.prototxt:
name: "CIFAR10_quick"layer {  name: "cifar"  type: "Data"  top: "data"  top: "label"  include {    phase: TRAIN  }  transform_param {    mean_file: "examples/cifar10/mean.binaryproto"  }  data_param {    source: "examples/cifar10/cifar10_train_lmdb"    batch_size: 100    backend: LMDB  }}layer {  name: "cifar"  type: "Data"  top: "data"  top: "label"  include {    phase: TEST  }  transform_param {    mean_file: "examples/cifar10/mean.binaryproto"  }  data_param {    source: "examples/cifar10/cifar10_test_lmdb"    batch_size: 100    backend: LMDB  }}layer {  name: "conv1"  type: "Convolution"  bottom: "data"  top: "conv1"  param {    lr_mult: 1  }  param {    lr_mult: 2  }  convolution_param {    num_output: 32    pad: 2    kernel_size: 5    stride: 1    weight_filler {      type: "gaussian"      std: 0.0001    }    bias_filler {      type: "constant"    }  }}layer {  name: "pool1"  type: "Pooling"  bottom: "conv1"  top: "pool1"  pooling_param {    pool: MAX    kernel_size: 3    stride: 2  }}layer {  name: "relu1"  type: "ReLU"  bottom: "pool1"  top: "pool1"}layer {  name: "conv2"  type: "Convolution"  bottom: "pool1"  top: "conv2"  param {    lr_mult: 1  }  param {    lr_mult: 2  }  convolution_param {    num_output: 32    pad: 2    kernel_size: 5    stride: 1    weight_filler {      type: "gaussian"      std: 0.01    }    bias_filler {      type: "constant"    }  }}layer {  name: "relu2"  type: "ReLU"  bottom: "conv2"  top: "conv2"}layer {  name: "pool2"  type: "Pooling"  bottom: "conv2"  top: "pool2"  pooling_param {    pool: AVE    kernel_size: 3    stride: 2  }}layer {  name: "conv3"  type: "Convolution"  bottom: "pool2"  top: "conv3"  param {    lr_mult: 1  }  param {    lr_mult: 2  }  convolution_param {    num_output: 64    pad: 2    kernel_size: 5    stride: 1    weight_filler {      type: "gaussian"      std: 0.01    }    bias_filler {      type: "constant"    }  }}layer {  name: "relu3"  type: "ReLU"  bottom: "conv3"  top: "conv3"}layer {  name: "pool3"  type: "Pooling"  bottom: "conv3"  top: "pool3"  pooling_param {    pool: AVE    kernel_size: 3    stride: 2  }}layer {  name: "ip1"  type: "InnerProduct"  bottom: "pool3"  top: "ip1"  param {    lr_mult: 1  }  param {    lr_mult: 2  }  inner_product_param {    num_output: 64    weight_filler {      type: "gaussian"      std: 0.1    }    bias_filler {      type: "constant"    }  }}layer {  name: "ip2"  type: "InnerProduct"  bottom: "ip1"  top: "ip2"  param {    lr_mult: 1  }  param {    lr_mult: 2  }  inner_product_param {    num_output: 10    weight_filler {      type: "gaussian"      std: 0.1    }    bias_filler {      type: "constant"    }  }}layer {  name: "accuracy"  type: "Accuracy"  bottom: "ip2"  bottom: "label"  top: "accuracy"  include {    phase: TEST  }}layer {  name: "loss"  type: "SoftmaxWithLoss"  bottom: "ip2"  bottom: "label"  top: "loss"}
  • I will summarize the details of this model in a later article, so I won't elaborate here.

  • Finally, change to the Caffe root directory and start training by running: sudo time sh examples/cifar10/train_quick.sh

  • When training completes, the log looks like this:

I0924 10:33:24.523983  3531 layer_factory.hpp:77] Creating layer cifar
I0924 10:33:24.524039  3531 db_lmdb.cpp:35] Opened lmdb examples/cifar10/cifar10_test_lmdb
I0924 10:33:24.524055  3531 net.cpp:84] Creating Layer cifar
I0924 10:33:24.524075  3531 net.cpp:380] cifar -> data
I0924 10:33:24.524097  3531 net.cpp:380] cifar -> label
I0924 10:33:24.524119  3531 data_transformer.cpp:25] Loading mean file from: examples/cifar10/mean.binaryproto
I0924 10:33:24.524245  3531 data_layer.cpp:45] output data size: 100,3,32,32
I0924 10:33:24.527096  3531 net.cpp:122] Setting up cifar
I0924 10:33:24.527137  3531 net.cpp:129] Top shape: 100 3 32 32 (307200)
I0924 10:33:24.527156  3531 net.cpp:129] Top shape: 100 (100)
I0924 10:33:24.527161  3531 net.cpp:137] Memory required for data: 1229200
I0924 10:33:24.527168  3531 layer_factory.hpp:77] Creating layer label_cifar_1_split
I0924 10:33:24.527191  3531 net.cpp:84] Creating Layer label_cifar_1_split
I0924 10:33:24.527212  3531 net.cpp:406] label_cifar_1_split <- label
I0924 10:33:24.527220  3531 net.cpp:380] label_cifar_1_split -> label_cifar_1_split_0
I0924 10:33:24.527242  3531 net.cpp:380] label_cifar_1_split -> label_cifar_1_split_1
I0924 10:33:24.527304  3531 net.cpp:122] Setting up label_cifar_1_split
I0924 10:33:24.527313  3531 net.cpp:129] Top shape: 100 (100)
I0924 10:33:24.527318  3531 net.cpp:129] Top shape: 100 (100)
I0924 10:33:24.527323  3531 net.cpp:137] Memory required for data: 1230000
I0924 10:33:24.527341  3531 layer_factory.hpp:77] Creating layer conv1
I0924 10:33:24.527353  3531 net.cpp:84] Creating Layer conv1
I0924 10:33:24.527371  3531 net.cpp:406] conv1 <- data
I0924 10:33:24.527379  3531 net.cpp:380] conv1 -> conv1
I0924 10:33:24.527643  3531 net.cpp:122] Setting up conv1
I0924 10:33:24.527654  3531 net.cpp:129] Top shape: 100 32 32 32 (3276800)
I0924 10:33:24.527660  3531 net.cpp:137] Memory required for data: 14337200
I0924 10:33:24.527671  3531 layer_factory.hpp:77] Creating layer pool1
I0924 10:33:24.527681  3531 net.cpp:84] Creating Layer pool1
I0924 10:33:24.527688  3531 net.cpp:406] pool1 <- conv1
I0924 10:33:24.527720  3531 net.cpp:380] pool1 -> pool1
I0924 10:33:24.527752  3531 net.cpp:122] Setting up pool1
I0924 10:33:24.527761  3531 net.cpp:129] Top shape: 100 32 16 16 (819200)
I0924 10:33:24.527766  3531 net.cpp:137] Memory required for data: 17614000
I0924 10:33:24.527772  3531 layer_factory.hpp:77] Creating layer relu1
I0924 10:33:24.527779  3531 net.cpp:84] Creating Layer relu1
I0924 10:33:24.527786  3531 net.cpp:406] relu1 <- pool1
I0924 10:33:24.527792  3531 net.cpp:367] relu1 -> pool1 (in-place)
I0924 10:33:24.527801  3531 net.cpp:122] Setting up relu1
I0924 10:33:24.527807  3531 net.cpp:129] Top shape: 100 32 16 16 (819200)
I0924 10:33:24.527813  3531 net.cpp:137] Memory required for data: 20890800
I0924 10:33:24.527819  3531 layer_factory.hpp:77] Creating layer conv2
I0924 10:33:24.527827  3531 net.cpp:84] Creating Layer conv2
I0924 10:33:24.527833  3531 net.cpp:406] conv2 <- pool1
I0924 10:33:24.527858  3531 net.cpp:380] conv2 -> conv2
I0924 10:33:24.528939  3531 net.cpp:122] Setting up conv2
I0924 10:33:24.528959  3531 net.cpp:129] Top shape: 100 32 16 16 (819200)
I0924 10:33:24.528966  3531 net.cpp:137] Memory required for data: 24167600
I0924 10:33:24.528976  3531 layer_factory.hpp:77] Creating layer relu2
I0924 10:33:24.528985  3531 net.cpp:84] Creating Layer relu2
I0924 10:33:24.528991  3531 net.cpp:406] relu2 <- conv2
I0924 10:33:24.529000  3531 net.cpp:367] relu2 -> conv2 (in-place)
I0924 10:33:24.529009  3531 net.cpp:122] Setting up relu2
I0924 10:33:24.529016  3531 net.cpp:129] Top shape: 100 32 16 16 (819200)
I0924 10:33:24.529022  3531 net.cpp:137] Memory required for data: 27444400
I0924 10:33:24.529028  3531 layer_factory.hpp:77] Creating layer pool2
I0924 10:33:24.529036  3531 net.cpp:84] Creating Layer pool2
I0924 10:33:24.529042  3531 net.cpp:406] pool2 <- conv2
I0924 10:33:24.529049  3531 net.cpp:380] pool2 -> pool2
I0924 10:33:24.529065  3531 net.cpp:122] Setting up pool2
I0924 10:33:24.529073  3531 net.cpp:129] Top shape: 100 32 8 8 (204800)
I0924 10:33:24.529079  3531 net.cpp:137] Memory required for data: 28263600
I0924 10:33:24.529084  3531 layer_factory.hpp:77] Creating layer conv3
I0924 10:33:24.529094  3531 net.cpp:84] Creating Layer conv3
I0924 10:33:24.529101  3531 net.cpp:406] conv3 <- pool2
I0924 10:33:24.529110  3531 net.cpp:380] conv3 -> conv3
I0924 10:33:24.530513  3531 net.cpp:122] Setting up conv3
I0924 10:33:24.530524  3531 net.cpp:129] Top shape: 100 64 8 8 (409600)
I0924 10:33:24.530529  3531 net.cpp:137] Memory required for data: 29902000
I0924 10:33:24.530544  3531 layer_factory.hpp:77] Creating layer relu3
I0924 10:33:24.530555  3531 net.cpp:84] Creating Layer relu3
I0924 10:33:24.530560  3531 net.cpp:406] relu3 <- conv3
I0924 10:33:24.530567  3531 net.cpp:367] relu3 -> conv3 (in-place)
I0924 10:33:24.530575  3531 net.cpp:122] Setting up relu3
I0924 10:33:24.530583  3531 net.cpp:129] Top shape: 100 64 8 8 (409600)
I0924 10:33:24.530589  3531 net.cpp:137] Memory required for data: 31540400
I0924 10:33:24.530596  3531 layer_factory.hpp:77] Creating layer pool3
I0924 10:33:24.530604  3531 net.cpp:84] Creating Layer pool3
I0924 10:33:24.530611  3531 net.cpp:406] pool3 <- conv3
I0924 10:33:24.530616  3531 net.cpp:380] pool3 -> pool3
I0924 10:33:24.530635  3531 net.cpp:122] Setting up pool3
I0924 10:33:24.530644  3531 net.cpp:129] Top shape: 100 64 4 4 (102400)
I0924 10:33:24.530649  3531 net.cpp:137] Memory required for data: 31950000
I0924 10:33:24.530655  3531 layer_factory.hpp:77] Creating layer ip1
I0924 10:33:24.530663  3531 net.cpp:84] Creating Layer ip1
I0924 10:33:24.530669  3531 net.cpp:406] ip1 <- pool3
I0924 10:33:24.530678  3531 net.cpp:380] ip1 -> ip1
I0924 10:33:24.532598  3531 net.cpp:122] Setting up ip1
I0924 10:33:24.532613  3531 net.cpp:129] Top shape: 100 64 (6400)
I0924 10:33:24.532618  3531 net.cpp:137] Memory required for data: 31975600
I0924 10:33:24.532627  3531 layer_factory.hpp:77] Creating layer ip2
I0924 10:33:24.532635  3531 net.cpp:84] Creating Layer ip2
I0924 10:33:24.532644  3531 net.cpp:406] ip2 <- ip1
I0924 10:33:24.532662  3531 net.cpp:380] ip2 -> ip2
I0924 10:33:24.532747  3531 net.cpp:122] Setting up ip2
I0924 10:33:24.532755  3531 net.cpp:129] Top shape: 100 10 (1000)
I0924 10:33:24.532762  3531 net.cpp:137] Memory required for data: 31979600
I0924 10:33:24.532770  3531 layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I0924 10:33:24.532778  3531 net.cpp:84] Creating Layer ip2_ip2_0_split
I0924 10:33:24.532784  3531 net.cpp:406] ip2_ip2_0_split <- ip2
I0924 10:33:24.532790  3531 net.cpp:380] ip2_ip2_0_split -> ip2_ip2_0_split_0
I0924 10:33:24.532799  3531 net.cpp:380] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0924 10:33:24.532826  3531 net.cpp:122] Setting up ip2_ip2_0_split
I0924 10:33:24.532835  3531 net.cpp:129] Top shape: 100 10 (1000)
I0924 10:33:24.532846  3531 net.cpp:129] Top shape: 100 10 (1000)
I0924 10:33:24.532852  3531 net.cpp:137] Memory required for data: 31987600
I0924 10:33:24.532858  3531 layer_factory.hpp:77] Creating layer accuracy
I0924 10:33:24.532881  3531 net.cpp:84] Creating Layer accuracy
I0924 10:33:24.532888  3531 net.cpp:406] accuracy <- ip2_ip2_0_split_0
I0924 10:33:24.532909  3531 net.cpp:406] accuracy <- label_cifar_1_split_0
I0924 10:33:24.532929  3531 net.cpp:380] accuracy -> accuracy
I0924 10:33:24.532938  3531 net.cpp:122] Setting up accuracy
I0924 10:33:24.532945  3531 net.cpp:129] Top shape: (1)
I0924 10:33:24.532951  3531 net.cpp:137] Memory required for data: 31987604
I0924 10:33:24.532956  3531 layer_factory.hpp:77] Creating layer loss
I0924 10:33:24.532964  3531 net.cpp:84] Creating Layer loss
I0924 10:33:24.532970  3531 net.cpp:406] loss <- ip2_ip2_0_split_1
I0924 10:33:24.532976  3531 net.cpp:406] loss <- label_cifar_1_split_1
I0924 10:33:24.532984  3531 net.cpp:380] loss -> loss
I0924 10:33:24.532992  3531 layer_factory.hpp:77] Creating layer loss
I0924 10:33:24.533048  3531 net.cpp:122] Setting up loss
I0924 10:33:24.533059  3531 net.cpp:129] Top shape: (1)
I0924 10:33:24.533066  3531 net.cpp:132]     with loss weight 1
I0924 10:33:24.533083  3531 net.cpp:137] Memory required for data: 31987608
I0924 10:33:24.533089  3531 net.cpp:198] loss needs backward computation.
I0924 10:33:24.533095  3531 net.cpp:200] accuracy does not need backward computation.
I0924 10:33:24.533103  3531 net.cpp:198] ip2_ip2_0_split needs backward computation.
I0924 10:33:24.533109  3531 net.cpp:198] ip2 needs backward computation.
I0924 10:33:24.533114  3531 net.cpp:198] ip1 needs backward computation.
I0924 10:33:24.533119  3531 net.cpp:198] pool3 needs backward computation.
I0924 10:33:24.533125  3531 net.cpp:198] relu3 needs backward computation.
I0924 10:33:24.533131  3531 net.cpp:198] conv3 needs backward computation.
I0924 10:33:24.533136  3531 net.cpp:198] pool2 needs backward computation.
I0924 10:33:24.533143  3531 net.cpp:198] relu2 needs backward computation.
I0924 10:33:24.533149  3531 net.cpp:198] conv2 needs backward computation.
I0924 10:33:24.533154  3531 net.cpp:198] relu1 needs backward computation.
I0924 10:33:24.533162  3531 net.cpp:198] pool1 needs backward computation.
I0924 10:33:24.533167  3531 net.cpp:198] conv1 needs backward computation.
I0924 10:33:24.533174  3531 net.cpp:200] label_cifar_1_split does not need backward computation.
I0924 10:33:24.533180  3531 net.cpp:200] cifar does not need backward computation.
I0924 10:33:24.533186  3531 net.cpp:242] This network produces output accuracy
I0924 10:33:24.533192  3531 net.cpp:242] This network produces output loss
I0924 10:33:24.533205  3531 net.cpp:255] Network initialization done.
I0924 10:33:24.533248  3531 solver.cpp:56] Solver scaffolding done.
I0924 10:33:24.533486  3531 caffe.cpp:242] Resuming from examples/cifar10/cifar10_quick_iter_4000.solverstate
I0924 10:33:24.534968  3531 sgd_solver.cpp:318] SGDSolver: restoring history
I0924 10:33:24.535162  3531 caffe.cpp:248] Starting Optimization
I0924 10:33:24.535171  3531 solver.cpp:272] Solving CIFAR10_quick
I0924 10:33:24.535177  3531 solver.cpp:273] Learning Rate Policy: fixed
I0924 10:33:24.535706  3531 solver.cpp:330] Iteration 4000, Testing net (#0)
I0924 10:33:26.685408  3536 data_layer.cpp:73] Restarting data prefetching from start.
I0924 10:33:26.773063  3531 solver.cpp:397]     Test net output #0: accuracy = 0.7168
I0924 10:33:26.773087  3531 solver.cpp:397]     Test net output #1: loss = 0.848362 (* 1 = 0.848362 loss)
I0924 10:33:26.831161  3531 solver.cpp:218] Iteration 4000 (1742.17 iter/s, 2.29599s/100 iters), loss = 0.583209
I0924 10:33:26.831185  3531 solver.cpp:237]     Train net output #0: loss = 0.583209 (* 1 = 0.583209 loss)
I0924 10:33:26.831209  3531 sgd_solver.cpp:105] Iteration 4000, lr = 0.0001
I0924 10:33:32.549178  3531 solver.cpp:218] Iteration 4100 (17.4885 iter/s, 5.71804s/100 iters), loss = 0.621489
I0924 10:33:32.549214  3531 solver.cpp:237]     Train net output #0: loss = 0.621489 (* 1 = 0.621489 loss)
I0924 10:33:32.549237  3531 sgd_solver.cpp:105] Iteration 4100, lr = 0.0001
I0924 10:33:38.263624  3531 solver.cpp:218] Iteration 4200 (17.4995 iter/s, 5.71446s/100 iters), loss = 0.511802
I0924 10:33:38.263659  3531 solver.cpp:237]     Train net output #0: loss = 0.511802 (* 1 = 0.511802 loss)
I0924 10:33:38.263682  3531 sgd_solver.cpp:105] Iteration 4200, lr = 0.0001
I0924 10:33:43.976765  3531 solver.cpp:218] Iteration 4300 (17.5035 iter/s, 5.71316s/100 iters), loss = 0.478999
I0924 10:33:43.976801  3531 solver.cpp:237]     Train net output #0: loss = 0.478999 (* 1 = 0.478999 loss)
I0924 10:33:43.976824  3531 sgd_solver.cpp:105] Iteration 4300, lr = 0.0001
I0924 10:33:49.691663  3531 solver.cpp:218] Iteration 4400 (17.4981 iter/s, 5.71491s/100 iters), loss = 0.53704
I0924 10:33:49.691699  3531 solver.cpp:237]     Train net output #0: loss = 0.53704 (* 1 = 0.53704 loss)
I0924 10:33:49.691722  3531 sgd_solver.cpp:105] Iteration 4400, lr = 0.0001
I0924 10:33:55.126360  3535 data_layer.cpp:73] Restarting data prefetching from start.
I0924 10:33:55.325109  3531 solver.cpp:330] Iteration 4500, Testing net (#0)
I0924 10:33:57.500412  3536 data_layer.cpp:73] Restarting data prefetching from start.
I0924 10:33:57.588099  3531 solver.cpp:397]     Test net output #0: accuracy = 0.7539
I0924 10:33:57.588124  3531 solver.cpp:397]     Test net output #1: loss = 0.759468 (* 1 = 0.759468 loss)
I0924 10:33:57.645143  3531 solver.cpp:218] Iteration 4500 (12.573 iter/s, 7.95352s/100 iters), loss = 0.504953
I0924 10:33:57.645170  3531 solver.cpp:237]     Train net output #0: loss = 0.504953 (* 1 = 0.504953 loss)
I0924 10:33:57.645193  3531 sgd_solver.cpp:105] Iteration 4500, lr = 0.0001
I0924 10:34:03.363797  3531 solver.cpp:218] Iteration 4600 (17.4866 iter/s, 5.71868s/100 iters), loss = 0.539086
I0924 10:34:03.363833  3531 solver.cpp:237]     Train net output #0: loss = 0.539086 (* 1 = 0.539086 loss)
I0924 10:34:03.363857  3531 sgd_solver.cpp:105] Iteration 4600, lr = 0.0001
I0924 10:34:09.086326  3531 solver.cpp:218] Iteration 4700 (17.4747 iter/s, 5.72254s/100 iters), loss = 0.479981
I0924 10:34:09.086364  3531 solver.cpp:237]     Train net output #0: loss = 0.479981 (* 1 = 0.479981 loss)
I0924 10:34:09.086372  3531 sgd_solver.cpp:105] Iteration 4700, lr = 0.0001
I0924 10:34:14.808562  3531 solver.cpp:218] Iteration 4800 (17.4756 iter/s, 5.72225s/100 iters), loss = 0.448841
I0924 10:34:14.808598  3531 solver.cpp:237]     Train net output #0: loss = 0.448841 (* 1 = 0.448841 loss)
I0924 10:34:14.808605  3531 sgd_solver.cpp:105] Iteration 4800, lr = 0.0001
I0924 10:34:20.530061  3531 solver.cpp:218] Iteration 4900 (17.4779 iter/s, 5.72152s/100 iters), loss = 0.514297
I0924 10:34:20.530097  3531 solver.cpp:237]     Train net output #0: loss = 0.514297 (* 1 = 0.514297 loss)
I0924 10:34:20.530119  3531 sgd_solver.cpp:105] Iteration 4900, lr = 0.0001
I0924 10:34:25.963594  3535 data_layer.cpp:73] Restarting data prefetching from start.
I0924 10:34:26.162818  3531 solver.cpp:457] Snapshotting to HDF5 file examples/cifar10/cifar10_quick_iter_5000.caffemodel.h5
I0924 10:34:26.193091  3531 sgd_solver.cpp:283] Snapshotting solver state to HDF5 file examples/cifar10/cifar10_quick_iter_5000.solverstate.h5
I0924 10:34:26.216594  3531 solver.cpp:310] Iteration 5000, loss = 0.48887
I0924 10:34:26.216617  3531 solver.cpp:330] Iteration 5000, Testing net (#0)
I0924 10:34:28.371587  3536 data_layer.cpp:73] Restarting data prefetching from start.
I0924 10:34:28.459497  3531 solver.cpp:397]     Test net output #0: accuracy = 0.7525
I0924 10:34:28.459523  3531 solver.cpp:397]     Test net output #1: loss = 0.756179 (* 1 = 0.756179 loss)
I0924 10:34:28.459530  3531 solver.cpp:315] Optimization Done.
I0924 10:34:28.459550  3531 caffe.cpp:259] Optimization Done.
  • In the end, the accuracy reaches about 75%. Caffe also provides a full-training version (train_full.sh); you only need to change the file names in the commands above accordingly, so I won't repeat the steps here. A sketch of using the trained snapshot follows.
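As a follow-up, here is a hedged sketch of classifying one test image with the snapshot this run produced. It assumes pycaffe is importable, that you run from the Caffe root, and that the deploy-style net definition examples/cifar10/cifar10_quick.prototxt shipped with the Caffe examples exposes a softmax output blob named "prob"; adjust the names if your checkout differs.

import numpy as np
import caffe
from caffe.proto import caffe_pb2

# Load the deploy net with the HDF5 snapshot named in the log above.
net = caffe.Net('examples/cifar10/cifar10_quick.prototxt',
                'examples/cifar10/cifar10_quick_iter_5000.caffemodel.h5',
                caffe.TEST)

# Load the training mean (see the earlier sketch) ...
blob = caffe_pb2.BlobProto()
with open('examples/cifar10/mean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())
mean = caffe.io.blobproto_to_array(blob)[0]

# ... and one record from the binary test batch: <1 byte label><3072 bytes pixels>.
raw = np.fromfile('data/cifar10/test_batch.bin', dtype=np.uint8)[:3073]
label, image = raw[0], raw[1:].reshape(3, 32, 32).astype(np.float32)

net.blobs['data'].data[0] = image - mean       # same mean subtraction as training
prob = net.forward()['prob'][0]
print('true label:', label, 'predicted:', prob.argmax())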