A Caffe getting-started example: training on the MNIST handwritten-digit set

Converting images into the LMDB format Caffe expects:
import lmdb
import caffe

def image_to_lmdb(lmdb_file, image_list, label_list):
    N = len(image_list)
    # Reserve roughly 10x the raw image bytes for the database map.
    map_size = image_list[0].nbytes * N * 10
    env = lmdb.Environment(lmdb_file, map_size=map_size)
    with env.begin(write=True) as txn:  # txn is a Transaction object
        for i in range(N):
            img = image_list[i]
            label = label_list[i]
            # HxWxC -> CxHxW: Caffe stores images channel-first.
            img = img.transpose((2, 0, 1))
            datum = caffe.io.array_to_datum(img)
            datum.label = label
            # Zero-padded keys keep the entries in insertion order.
            str_id = '%08d' % i
            txn.put(str_id.encode('ascii'), datum.SerializeToString())
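To smoke-test the function you can drive it with a few synthetic arrays. The sketch below is my own illustration, not part of the original post: the shapes, labels, and the output path fake_lmdb are made up.

import numpy as np

# Hypothetical smoke test: ten random 28x28 single-channel "images" in the
# HxWxC uint8 layout that image_to_lmdb() transposes to CxHxW.
image_list = [np.random.randint(0, 256, (28, 28, 1), dtype=np.uint8) for _ in range(10)]
label_list = list(range(10))

image_to_lmdb('fake_lmdb', image_list, label_list)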

Writing the prototxt files: the LeNet-5 network

import caffe
from caffe import layers as L, params as P

def lenet(lmdb_path, batch_size):
    # our version of LeNet: a series of linear and simple nonlinear transformations
    n = caffe.NetSpec()
    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb_path,
                             transform_param=dict(scale=1./255), ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.fc1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.fc1, in_place=True)
    n.score = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.score, n.label)
    return n.to_proto()

with open('mnist/lenet_auto_train.prototxt', 'w') as f:
    f.write(str(lenet('mnist/mnist_train_lmdb', 64)))

with open('mnist/lenet_auto_test.prototxt', 'w') as f:
    f.write(str(lenet('mnist/mnist_test_lmdb', 100)))
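After running the script, the generated definition can be loaded back to verify the blob shapes. This is a sketch under the assumption that mnist/mnist_test_lmdb already exists on disk (the Data layer opens its source when the net is constructed):

import caffe

caffe.set_mode_cpu()
# Load the generated test net and print each blob's shape.
net = caffe.Net('mnist/lenet_auto_test.prototxt', caffe.TEST)
for name, blob in net.blobs.items():
    print(name, blob.data.shape)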

Setting up the solver file (the training hyperparameters)

train_net: "mnist/lenet_auto_train.prototxt"
test_net: "mnist/lenet_auto_test.prototxt"
# number of iterations per test pass (100 iterations x batch 100 = the full 10k test set)
test_iter: 100
# run a test pass every test_interval training iterations
test_interval: 500
# initial learning rate
base_lr: 0.01
# lr_policy: how the learning rate changes over time; the options are:
# "step"      -- every stepsize iterations, base_lr *= gamma
# "multistep" -- like "step", but the rate changes at the iterations listed in stepvalue
# "fixed"     -- base_lr stays constant
# "inv"       -- base_lr * (1 + gamma * iter) ^ (-power)
# "exp"       -- base_lr * gamma ^ iter
# "poly"      -- polynomial decay, reaching 0 at max_iter: base_lr * (1 - iter/max_iter) ^ power
# "sigmoid"   -- S-curve decay: base_lr * (1 / (1 + exp(-gamma * (iter - stepsize))))
lr_policy: "inv"
# learning-rate decay factor
gamma: 0.0001
# lower the rate every stepsize iterations; used with lr_policy: "step"
#stepsize: 3000
# the iterations (positive integers) at which training moves to its next stage;
# used mainly with lr_policy: "multistep"
#stepvalue
# print progress every display iterations
display: 1000
# maximum number of training iterations
max_iter: 10000
power: 0.75
# momentum term to speed up convergence: v = momentum * v - lr * dx; x = x + v
momentum: 0.9
# weight decay, to guard against overfitting
weight_decay: 0.0005
# save the model and solverstate every snapshot iterations
snapshot: 5000
# filename prefix for the snapshot model and solverstate
snapshot_prefix: "mnist/custom_net"
# use CPU or GPU
solver_mode: GPU
random_seed: 831486
# solver type:
# "SGD"      -- stochastic gradient descent (the default)
# "AdaDelta" -- a "robust learning rate" gradient-based method
# "AdaGrad"  -- adaptive gradient
# "Adam"     -- a gradient-based optimization method
# "Nesterov" -- Nesterov's accelerated gradient, with very fast convergence on convex problems
# "RMSProp"  -- a gradient-based optimization method
type: "SGD"
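To get a feel for the "inv" policy chosen above, the decay formula can be evaluated directly with the values from this solver file. The snippet below is plain Python, nothing Caffe-specific:

base_lr, gamma, power = 0.01, 0.0001, 0.75

def inv_lr(it):
    # "inv" policy: base_lr * (1 + gamma * iter) ^ (-power)
    return base_lr * (1.0 + gamma * it) ** (-power)

# The rate drifts down smoothly: ~0.0093 at iter 1000, ~0.0059 at iter 10000.
for it in (0, 1000, 5000, 10000):
    print('iter %5d: lr = %.6f' % (it, inv_lr(it)))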
Running the training

import caffe
caffe.set_device(0)
caffe.set_mode_gpu()

# load the solver and run all max_iter training iterations
solver = caffe.get_solver('mnist/solver.prototxt')
solver.solve()
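solver.solve() runs all max_iter iterations in one call. To watch the loss as training progresses, the solver can also be stepped manually, along the lines of the official pycaffe tutorial; this is a sketch, where 'loss' is the blob name defined in lenet() above:

import caffe

caffe.set_device(0)
caffe.set_mode_gpu()
solver = caffe.get_solver('mnist/solver.prototxt')

# Take one SGD step at a time (forward + backward + update) and log the loss.
for it in range(200):
    solver.step(1)
    if it % 50 == 0:
        print('iter %d, train loss = %.4f' % (it, float(solver.net.blobs['loss'].data)))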




