Caffe Tutorial: Defining a Network with pycaffe

Source: Internet · Editor: 程序博客网 · Posted: 2024/06/05 03:49

Defining a network with pycaffe:

  • Reference: Learning LeNet

  • Import the required modules:

    import caffe
    from caffe import layers as L
    from caffe import params as P
  • Create a NetSpec to build the Net:

    n = caffe.NetSpec()
  • Define the DataLayer:

    n.data, n.label = L.Data(batch_size=batch_size,
                             backend=P.Data.LMDB, source=lmdb,
                             transform_param=dict(scale=1. / 255), ntop=2)
    # Generated layer definition:
    layer {
      name: "data"
      type: "Data"
      top: "data"
      top: "label"
      transform_param {
        scale: 0.00392156862745
      }
      data_param {
        source: "mnist/mnist_train_lmdb"
        batch_size: 64
        backend: LMDB
      }
    }
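    The `transform_param` scale of `1. / 255` maps raw 8-bit pixel values from [0, 255] into [0, 1]; the prototxt simply stores its decimal expansion. A minimal pure-Python sketch of that arithmetic (no caffe required):

    ```python
    # scale = 1/255 normalizes 8-bit pixels [0, 255] into [0, 1]
    scale = 1.0 / 255
    print(scale)        # the value serialized into the prototxt
    print(255 * scale)  # a maximal pixel maps to 1.0

    # matches the prototxt's truncated decimal expansion
    assert abs(scale - 0.00392156862745) < 1e-12
    ```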
  • Define the ConvolutionLayer:

    n.conv1 = L.Convolution(n.data, kernel_size=5,
                            num_output=20, weight_filler=dict(type='xavier'))
    # Generated layer definition:
    layer {
      name: "conv1"
      type: "Convolution"
      bottom: "data"
      top: "conv1"
      convolution_param {
        num_output: 20
        kernel_size: 5
        weight_filler {
          type: "xavier"
        }
      }
    }
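    With no explicit `pad` or `stride`, Caffe's Convolution defaults to pad=0, stride=1, so each 28×28 MNIST input yields a 24×24 map per filter via out = (in + 2·pad − kernel) / stride + 1. A plain-Python sketch of the formula (not part of the pycaffe API):

    ```python
    def conv_out(in_size, kernel, stride=1, pad=0):
        # Caffe's convolution output-size formula (floor division)
        return (in_size + 2 * pad - kernel) // stride + 1

    # conv1: 28x28 MNIST input, kernel_size=5 -> 24x24, with 20 output channels
    print(conv_out(28, 5))  # 24
    ```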
  • Define the PoolingLayer:

    n.pool1 = L.Pooling(n.conv1, kernel_size=2,
                        stride=2, pool=P.Pooling.MAX)
    # Generated layer definition:
    layer {
      name: "pool1"
      type: "Pooling"
      bottom: "conv1"
      top: "pool1"
      pooling_param {
        pool: MAX
        kernel_size: 2
        stride: 2
      }
    }
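    Pooling in Caffe uses a ceiling (rather than floor) in its output-size formula, so a 2×2/stride-2 max pool halves conv1's 24×24 maps to 12×12. A sketch of that formula in plain Python:

    ```python
    import math

    def pool_out(in_size, kernel, stride):
        # Caffe's pooling output-size formula (note: ceil, unlike convolution's floor)
        return int(math.ceil((in_size - kernel) / float(stride))) + 1

    # pool1 halves conv1's 24x24 maps to 12x12
    print(pool_out(24, 2, 2))  # 12
    ```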
  • Define the InnerProductLayer:

    n.ip1 = L.InnerProduct(n.pool2, num_output=500,
                           weight_filler=dict(type='xavier'))
    # Generated layer definition:
    layer {
      name: "ip1"
      type: "InnerProduct"
      bottom: "pool2"
      top: "ip1"
      inner_product_param {
        num_output: 500
        weight_filler {
          type: "xavier"
        }
      }
    }
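    An InnerProduct layer flattens its bottom blob and applies a fully connected transform. In the full LeNet below, pool2 is 50 channels of 4×4 maps, so ip1 sees an 800-dimensional input; a quick parameter-count sketch (plain arithmetic, assuming the 28×28 MNIST input shapes worked out above):

    ```python
    # ip1 flattens pool2 (50 channels x 4 x 4 spatial) into an 800-dim vector,
    # then applies a fully connected layer with 500 outputs.
    flat_in = 50 * 4 * 4          # 800 inputs
    weights = flat_in * 500       # 400000 weights
    biases = 500
    print(weights + biases)       # 400500 learnable parameters in ip1
    ```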
  • Define the ReLULayer:

    n.relu1 = L.ReLU(n.ip1, in_place=True)
    # Generated layer definition:
    layer {
      name: "relu1"
      type: "ReLU"
      bottom: "ip1"
      top: "ip1"
    }
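    Note that `in_place=True` makes the top blob reuse the bottom blob (`"ip1"` appears as both `bottom` and `top`), which saves memory. The operation itself is an element-wise max(0, x); a pure-Python sketch:

    ```python
    def relu(x):
        # Element-wise ReLU: max(0, x)
        return [max(0.0, v) for v in x]

    print(relu([-1.5, 0.0, 2.0]))  # [0.0, 0.0, 2.0]
    ```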
  • Define the SoftmaxWithLossLayer:

    n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
    # Generated layer definition:
    layer {
      name: "loss"
      type: "SoftmaxWithLoss"
      bottom: "ip2"
      bottom: "label"
      top: "loss"
    }
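    SoftmaxWithLoss fuses a softmax over ip2's class scores with the multinomial logistic (negative log-likelihood) loss of the true label. A self-contained sketch of what it computes for a single example:

    ```python
    import math

    def softmax_with_loss(scores, label):
        # Numerically stable softmax, then negative log-likelihood of the true class
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        probs = [e / total for e in exps]
        return -math.log(probs[label])

    # A confident, correct prediction gives a small loss;
    # uniform scores give loss = log(num_classes)
    print(softmax_with_loss([5.0, 0.0, 0.0], 0))
    print(softmax_with_loss([0.0, 0.0, 0.0], 0))
    ```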
  • Define the full MNIST (LeNet) network:

    import caffe
    from caffe import layers as L
    from caffe import params as P

    def lenet(lmdb, batch_size):
        n = caffe.NetSpec()
        n.data, n.label = L.Data(batch_size=batch_size,
                                 backend=P.Data.LMDB, source=lmdb,
                                 transform_param=dict(scale=1. / 255), ntop=2)
        n.conv1 = L.Convolution(n.data, kernel_size=5,
                                num_output=20, weight_filler=dict(type='xavier'))
        n.pool1 = L.Pooling(n.conv1, kernel_size=2,
                            stride=2, pool=P.Pooling.MAX)
        n.conv2 = L.Convolution(n.pool1, kernel_size=5,
                                num_output=50, weight_filler=dict(type='xavier'))
        n.pool2 = L.Pooling(n.conv2, kernel_size=2,
                            stride=2, pool=P.Pooling.MAX)
        n.ip1 = L.InnerProduct(n.pool2, num_output=500,
                               weight_filler=dict(type='xavier'))
        n.relu1 = L.ReLU(n.ip1, in_place=True)
        n.ip2 = L.InnerProduct(n.relu1, num_output=10,
                               weight_filler=dict(type='xavier'))
        n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
        return n.to_proto()
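    The spatial sizes flowing through this definition can be traced with Caffe's output-size formulas (floor for convolution, ceil for pooling); starting from a 28×28 MNIST digit, a plain-Python sketch:

    ```python
    import math

    def conv_out(s, k, stride=1, pad=0):
        # Convolution output size (floor division)
        return (s + 2 * pad - k) // stride + 1

    def pool_out(s, k, stride):
        # Pooling output size (ceil)
        return int(math.ceil((s - k) / float(stride))) + 1

    # Spatial sizes through lenet(), starting from 28x28 MNIST digits
    s = 28
    s = conv_out(s, 5)     # conv1 -> 24
    s = pool_out(s, 2, 2)  # pool1 -> 12
    s = conv_out(s, 5)     # conv2 -> 8
    s = pool_out(s, 2, 2)  # pool2 -> 4
    print(s)               # 4; ip1 then sees 50 * 4 * 4 = 800 inputs
    ```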
  • Save the network definitions:

    with open('mnist/lenet_auto_train.prototxt', 'w') as f:
        f.write(str(lenet('mnist/mnist_train_lmdb', 64)))
    with open('mnist/lenet_auto_test.prototxt', 'w') as f:
        f.write(str(lenet('mnist/mnist_test_lmdb', 100)))
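    One quick sanity check on a generated file is to count its top-level layer blocks; the LeNet definition above should produce nine. A hypothetical helper using plain string matching (not part of the caffe API):

    ```python
    def count_layers(prototxt_text):
        # Count top-level "layer {" blocks in a prototxt string
        return prototxt_text.count('layer {')

    sample = """
    layer {
      name: "data"
      type: "Data"
    }
    layer {
      name: "conv1"
      type: "Convolution"
    }
    """
    print(count_layers(sample))  # 2
    ```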
  • The generated lenet_auto_train.prototxt is shown below (lenet_auto_test.prototxt is similar):

    layer {
      name: "data"
      type: "Data"
      top: "data"
      top: "label"
      transform_param {
        scale: 0.00392156862745
      }
      data_param {
        source: "mnist/mnist_train_lmdb"
        batch_size: 64
        backend: LMDB
      }
    }
    layer {
      name: "conv1"
      type: "Convolution"
      bottom: "data"
      top: "conv1"
      convolution_param {
        num_output: 20
        kernel_size: 5
        weight_filler {
          type: "xavier"
        }
      }
    }
    layer {
      name: "pool1"
      type: "Pooling"
      bottom: "conv1"
      top: "pool1"
      pooling_param {
        pool: MAX
        kernel_size: 2
        stride: 2
      }
    }
    layer {
      name: "conv2"
      type: "Convolution"
      bottom: "pool1"
      top: "conv2"
      convolution_param {
        num_output: 50
        kernel_size: 5
        weight_filler {
          type: "xavier"
        }
      }
    }
    layer {
      name: "pool2"
      type: "Pooling"
      bottom: "conv2"
      top: "pool2"
      pooling_param {
        pool: MAX
        kernel_size: 2
        stride: 2
      }
    }
    layer {
      name: "ip1"
      type: "InnerProduct"
      bottom: "pool2"
      top: "ip1"
      inner_product_param {
        num_output: 500
        weight_filler {
          type: "xavier"
        }
      }
    }
    layer {
      name: "relu1"
      type: "ReLU"
      bottom: "ip1"
      top: "ip1"
    }
    layer {
      name: "ip2"
      type: "InnerProduct"
      bottom: "ip1"
      top: "ip2"
      inner_product_param {
        num_output: 10
        weight_filler {
          type: "xavier"
        }
      }
    }
    layer {
      name: "loss"
      type: "SoftmaxWithLoss"
      bottom: "ip2"
      bottom: "label"
      top: "loss"
    }