Experiment Log


2017-??-17

Experiment 1

Training network: LFRfromS_solver_gender.prototxt

| Role | Source | Notes |
| --- | --- | --- |
| Training set | CelebA | 194,124 images |
| Validation set | Adience | fold 0, gender_val_lmdb |
| Test set | Adience | fold 0, gender_test_lmdb |

| Test model | Test accuracy | Notes |
| --- | --- | --- |
| caffenet_gender_iter_35000.caffemodel | 0.83075 | (figure missing) |
| caffenet_gender_iter_45000.caffemodel | 0.83075 | (figure missing) |
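The snapshot accuracies above are simply the fraction of test images whose predicted gender matches the ground-truth label. A minimal sketch of that computation, with hypothetical toy scores and labels in place of the real network outputs:

```python
import numpy as np

# Hypothetical softmax scores from the gender net (N x 2) and ground-truth labels
scores = np.array([[0.9, 0.1], [0.3, 0.7], [0.2, 0.8], [0.6, 0.4]])
labels = np.array([0, 1, 0, 0])

pred = scores.argmax(axis=1)        # predicted class per image
accuracy = (pred == labels).mean()  # fraction of correct predictions
print(accuracy)                     # 0.75 for this toy example
```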

Experiment 2

Experiment description:
Merge CelebA and the Adience fold-0 training set into a single training set, and use the Adience fold-0 test set as the validation set.
Monitor accuracy during training.
Training network: LFRfromS_solver_gender.prototxt

| Role | Source | Notes |
| --- | --- | --- |
| Training set | CelebA + Adience (fold-0 training set) | 206,371 images |
| Validation set | Adience (fold-0 test set) | the test set stands in for the validation set, so accuracy is read off directly during training |
| Test set | none | n/a |

On the validation set, the best accuracy appears at iteration 20000: accuracy = 0.871656.
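Merging the two training sets amounts to concatenating their "image-path label" list files before rebuilding the merged LMDB. A minimal sketch with toy stand-in files (the file names here are hypothetical; the real lists are whatever the LMDB conversion tool consumed):

```python
# Concatenate two "path label" list files into one merged training list.
def merge_lists(list_a, list_b, merged):
    with open(merged, 'w') as out:
        for path in (list_a, list_b):
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if line:  # skip blank lines
                        out.write(line + '\n')

# Toy inputs standing in for the CelebA and Adience fold-0 lists
with open('celeba_train.txt', 'w') as f:
    f.write('img_000001.jpg 1\nimg_000002.jpg 0\n')
with open('adience_fold0_train.txt', 'w') as f:
    f.write('face_0001.jpg 0\n')

merge_lists('celeba_train.txt', 'adience_fold0_train.txt', 'merged_train.txt')
```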

Experiment 3: original network

Description:
The original network from the paper "Age and Gender Classification using Convolutional Neural Networks".

| Role | Source | Notes |
| --- | --- | --- |
| Training set | Adience | fold 0, gender_train_lmdb |
| Validation set | Adience | fold 0, gender_val_lmdb |
| Test set | Adience | fold 0, gender_test_lmdb |

Test-set accuracy: (figure missing)

Experiment 4: original network

Description:
The original network from the paper "Age and Gender Classification using Convolutional Neural Networks".

| Role | Source | Notes |
| --- | --- | --- |
| Training set | CelebA | affined, gender_train_lmdb |
| Validation set | Adience | fold 0, gender_val_lmdb; accuracy 0.6765 |
| Test set | Adience | fold 0, gender_test_lmdb |

Test-set accuracy: (figure missing)

Experiment 5: original network

Description:
The original network from the paper "Age and Gender Classification using Convolutional Neural Networks".

| Role | Source | Notes |
| --- | --- | --- |
| Training set | Adience fold 0 + CelebA_affined | n/a |
| Validation set | Adience | fold 0, gender_val_lmdb; accuracy 0.842 at iteration 30000 |
| Test set | Adience | fold 0, gender_test_lmdb; evaluated on the test set |

Test-set accuracy: (figure missing)

Summary

Entries are validation / test accuracy:

| Training set \ Network | cnn_age_gender_net | LFR | LFR_fine_tune |
| --- | --- | --- | --- |
| Adience | 0.8828 / 0.821 | 0.787 / 0.705 | 0.9325 / 0.8457 |
| CelebA | 0.678 / 0.738 | 0.611 / 0.661 | 0.820 / 0.831 |
| Adience + CelebA | 0.843 / 0.857 | 0.857 / 0.842 | 0.871 / 0.8277 |

2017-05-03

Extract the pool5 features of VGG16 and classify them with libsvm.

Feature extraction: extract_vgg_feature.py

```python
# coding=utf-8
import numpy as np
import sys
import os
import scipy.io as sio

caffe_root = '/home/zwx/caffe/'
sys.path.insert(0, caffe_root + 'python')
import caffe

# deployPrototxt = ''
# modelFile = ''
# meanFile = ''  # can also be generated yourself
# List of images whose features will be extracted
# imageListFile = ''
# imageBasePath = ''
gpuIndex = 2

# Initialization: load the network in TEST mode on the chosen GPU
def initialize(deployPrototxt, modelFile):
    print 'initializing ...'
    caffe.set_mode_gpu()
    caffe.set_device(gpuIndex)
    net = caffe.Net(deployPrototxt, modelFile, caffe.TEST)
    return net

# Extract pool5 features for every image and save them to a .mat file
def extractFeature(imageBasePath, imageList, net, fea_mat, nd):
    # Preprocess the input data
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})  # input shape (1, 3, 224, 224)
    transformer.set_transpose('data', (2, 0, 1))  # HWC -> CHW, e.g. (224, 224, 3) -> (3, 224, 224)
    # transformer.set_mean('data', np.load(mean_file).mean(1).mean(1))  # subtract the mean image; skip if the model was trained without mean subtraction
    transformer.set_raw_scale('data', 255)  # rescale pixel values to [0, 255]
    transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
    # Set the batch size; increase it if there are many images
    batchsize = 1
    net.blobs['data'].reshape(batchsize, 3, 224, 224)
    mean_value = np.array([104, 117, 123], dtype=np.float32)
    mean_value = mean_value.reshape([3, 1, 1])
    num = 0
    fea = np.ndarray(shape=(len(imageList), nd, 1, 1))
    for imagefile in imageList:
        imagefile_abs = os.path.join(imageBasePath, imagefile)
        print imagefile_abs
        im = caffe.io.load_image(imagefile_abs)  # load the image
        net.blobs['data'].data[...] = transformer.preprocess('data', im) - mean_value  # preprocess and copy into the data blob
        out = net.forward()  # run a forward pass
        tmp = net.blobs['pool5'].data
        fea[num, :, :, :] = tmp
        num += 1
    sio.savemat(fea_mat, {'pool5fea': fea})

# Read the image list file, one relative path per line
def readImageList(imageListFile):
    imageList = []
    with open(imageListFile, 'r') as fi:
        while True:
            line = fi.readline().strip('\n')
            if not line:
                break
            imageList.append(line)
    print 'read imageList done image num ', len(imageList)
    return imageList

if __name__ == '__main__':
    trainImageBasePath = '/home/zwx/database/VGG/images_train/'
    testImageBasePath = '/home/zwx/database/VGG/images_test/'
    deployPrototxt25 = '/home/zwx/workspace/VGG_classifier/ThiNet/0.25/deploy.prototxt'
    deployPrototxt5 = '/home/zwx/workspace/VGG_classifier/ThiNet/0.5/deploy.prototxt'
    modelFile25 = '/home/zwx/workspace/VGG_classifier/ThiNet/0.25/_iter_120000.caffemodel'
    modelFile5 = '/home/zwx/workspace/VGG_classifier/ThiNet/0.5/ThiNet-GAP.caffemodel'
    trainImageList = '/home/zwx/workspace/VGG_classifier/image_train.txt'
    testImageList = '/home/zwx/workspace/VGG_classifier/image_test.txt'
    trainImageList = readImageList(trainImageList)
    testImageList = readImageList(testImageList)
    # ThiNet 0.25 (256-d pool5)
    net = initialize(deployPrototxt25, modelFile25)
    extractFeature(trainImageBasePath, trainImageList, net, './trainImagePool5Feature25.mat', 256)
    extractFeature(testImageBasePath, testImageList, net, './testImagePool5Feature25.mat', 256)
    # ThiNet 0.5 (512-d pool5)
    net = initialize(deployPrototxt5, modelFile5)
    extractFeature(trainImageBasePath, trainImageList, net, './trainImagePool5Feature5.mat', 512)
    extractFeature(testImageBasePath, testImageList, net, './testImagePool5Feature5.mat', 512)
```
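Note that extractFeature saves pool5 as a 4-D array of shape (N, nd, 1, 1), so each sample has to be flattened to a row vector before it can be fed to an SVM. A minimal sketch of that round-trip, using a synthetic array in place of the real .mat file:

```python
import numpy as np
import scipy.io as sio

# Synthetic stand-in for trainImagePool5Feature25.mat: 4 images, 256-d pool5 features
fea = np.random.rand(4, 256, 1, 1)
sio.savemat('toy_pool5.mat', {'pool5fea': fea})

loaded = sio.loadmat('toy_pool5.mat')['pool5fea']
flat = loaded.reshape(loaded.shape[0], -1)  # (N, nd, 1, 1) -> (N, nd)
print(flat.shape)                           # (4, 256)
```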

Experiment 1

```matlab
% vgg 0.25 (ThiNet 0.25, 256-d pool5 features)
% labels_train / labels_test are assumed to be in the workspace as double column vectors
train_fea = load('trainImagePool5Feature25');
test_fea = load('testImagePool5Feature25');
% pool5fea was saved as (N, 256, 1, 1); libsvm expects an N x d double matrix
train_X = double(reshape(train_fea.pool5fea, size(train_fea.pool5fea, 1), []));
test_X = double(reshape(test_fea.pool5fea, size(test_fea.pool5fea, 1), []));
model = svmtrain(labels_train, train_X);
[predict_label, accuracy, dec_values] = svmpredict(labels_test, test_X, model); % evaluate on the test set
```

Result:
(figure missing)

Experiment 2

```matlab
% vgg 0.5 (ThiNet 0.5, 512-d pool5 features)
% labels_train / labels_test are assumed to be in the workspace as double column vectors
train_fea = load('trainImagePool5Feature5');
test_fea = load('testImagePool5Feature5');
% pool5fea was saved as (N, 512, 1, 1); libsvm expects an N x d double matrix
train_X = double(reshape(train_fea.pool5fea, size(train_fea.pool5fea, 1), []));
test_X = double(reshape(test_fea.pool5fea, size(test_fea.pool5fea, 1), []));
model = svmtrain(labels_train, train_X);
[predict_label, accuracy, dec_values] = svmpredict(labels_test, test_X, model); % evaluate on the test set
```
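As a cross-check outside MATLAB, the same features-plus-linear-SVM setup can be reproduced with scikit-learn's LinearSVC. This is a sketch with synthetic data; the real inputs would be the flattened pool5 features and the gender labels:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
# Synthetic stand-ins for the flattened pool5 features and binary labels,
# made separable by thresholding the first feature
X_train = rng.rand(40, 16)
y_train = (X_train[:, 0] > 0.5).astype(int)
X_test = rng.rand(10, 16)
y_test = (X_test[:, 0] > 0.5).astype(int)

clf = LinearSVC().fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy, analogous to libsvm's svmpredict output
```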

(figure missing)
