【Machine Learning】【Python】Part 1: HoG + SVM Object Classification ---- "SVM Object Classification and Localization Detection"


Latest code on GitHub: https://github.com/HandsomeHans/SVM-classification-localization

I have recently been studying object classification with traditional methods, using HoG + SVM, although the classification accuracy is not very high. Here I write down some of my notes and takeaways.

First, a disclaimer: I will not explain the underlying theory in detail. There is plenty of material online, so please get a rough understanding of it first and then come back to this article.


Dataset of all the cup images from ImageNet: https://pan.baidu.com/s/1i54BMLJ

1. Counting HoG Features

First, two parameters:

1. pixels per cell

2. cells per block

HoG extracts features by sliding a window over the image; the block is that window. A block contains cells, and features are extracted within each cell. Concatenating the features of all the cells in a block gives the features of that block, and concatenating the features of all the blocks gives the features of the whole image.

Feature extraction inside a cell can be understood as binning gradient orientations, usually into 9 bins. With signed gradients each bin covers 40 degrees (9 bins spanning 360 degrees); with the unsigned gradients that skimage's hog uses by default, each bin covers 20 degrees over 180 degrees. Either way, each bin contributes one feature value per cell, so we can count the features of the whole image.

For a 300 × 600 image, with 15 × 15 pixels per cell and 2 × 2 cells per block, the image contains 10 × 20 non-overlapping blocks:

One cell: 9 features

One block: 4 × 9 = 36 features

Whole image: 10 × 20 × 36 = 7200 features
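Note that skimage's hog slides the 2 × 2 block one cell at a time rather than tiling non-overlapping blocks, so the vector it actually returns is longer than the simple estimate above. A minimal sketch to check the length (assuming scikit-image 0.13+ for the block_norm argument):

import numpy as np
from skimage.feature import hog

# A random 300 x 600 grayscale image, used only to inspect the feature-vector length.
img = np.random.rand(300, 600)

fd = hog(img, orientations=9, pixels_per_cell=(15, 15),
         cells_per_block=(2, 2), block_norm='L2-Hys')

# Non-overlapping estimate: (300/15/2) * (600/15/2) * 2 * 2 * 9 = 7200.
# With overlapping blocks skimage returns 19 * 39 * 2 * 2 * 9 = 26676 values.
print(len(fd))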

2. Code

If you feed images straight into HoG without any preprocessing, the background and other factors badly corrupt the extracted features. So the feature-extraction code I provide has two parts: one crops the object out of the image using the bounding-box (bbox) information in the xml annotation file before extracting features, and the other extracts features from the whole image directly. Since the number of features depends on the input image size, both the cropped images and the whole images are resized to the same fixed size. See the comments in the code for more details.

HoG feature-extraction code:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Tue Jun 13 10:24:50 2017

@author: hans
"""
from skimage.feature import hog
from sklearn.externals import joblib
import xml.dom.minidom as xdm
import numpy as np
from PIL import Image
import cv2
import os
import time

# HoG parameters
normalize = True
visualize = False
block_norm = 'L2-Hys'
cells_per_block = [2,2]
pixels_per_cell = [20,20]
orientations = 9

# path to the xml annotation files
train_xml_filePath = r'./train/Annotation'

def getBox(childDir):
    f_xml = os.path.join(train_xml_filePath, '%s.xml' %childDir.split('.')[0]) # build the xml path
    xml = xdm.parse(f_xml) # parse the annotation file
    filename = xml.getElementsByTagName('filename')
    filename = filename[0].firstChild.data.encode("utf-8") # read the file name
    xmin = xml.getElementsByTagName('xmin') # top-left pixel coordinates
    xmin = int(xmin[0].firstChild.data)
    ymin = xml.getElementsByTagName('ymin')
    ymin = int(ymin[0].firstChild.data)
    xmax = xml.getElementsByTagName('xmax') # bottom-right pixel coordinates
    xmax = int(xmax[0].firstChild.data)
    ymax = xml.getElementsByTagName('ymax')
    ymax = int(ymax[0].firstChild.data)
    box = (xmin,ymin,xmax,ymax)
    return box

def getDataWithCrop(filePath,label): # load data, cropping each image to its bounding box
    Data = []
    num = 0
    for childDir in os.listdir(filePath):
        f_im = os.path.join(filePath, childDir)
        image = Image.open(f_im) # open the image
        box = getBox(childDir)
        region = image.crop(box) # crop out the object
        data = np.asarray(region) # put the pixel data into an array
        data = cv2.resize(data,(200,200),interpolation=cv2.INTER_CUBIC) # resize to a fixed size
        data = np.reshape(data, (200*200,3))
        data.shape = 1,3,-1
        fileName = np.array([[childDir]])
        datalebels = zip(data, label, fileName) # bundle data, label and file name
        Data.extend(datalebels) # collect every sample in one list
        num += 1
        print "%d processing: %s" %(num,childDir)
    return Data,num

def getData(filePath,label): # load whole-image data, no cropping
    Data = []
    num = 0
    for childDir in os.listdir(filePath):
        f = os.path.join(filePath, childDir)
        data = cv2.imread(f)
        data = cv2.resize(data,(200,200),interpolation=cv2.INTER_CUBIC)
        data = np.reshape(data, (200 * 200,3))
        data.shape = 1,3,-1
        fileName = np.array([[childDir]])
        datalebels = zip(data, label, fileName)
        Data.extend(datalebels)
        num += 1
        print "%d processing: %s" %(num,childDir)
    return Data,num

def getFeat(Data,mode): # compute and save the feature vectors
    num = 0
    for data in Data:
        image = np.reshape(data[0], (200, 200, 3)) # restore the image
        gray = rgb2gray(image)/255.0 # convert to grayscale
        fd = hog(gray, orientations, pixels_per_cell, cells_per_block, block_norm, visualize, normalize)
        fd = np.concatenate((fd, (float(data[1]),))) # append the class label to the end of the feature vector
        filename = list(data[2])
        fd_name = filename[0].split('.')[0]+'.feat' # name of the feature file
        if mode == 'train':
            fd_path = os.path.join('./features/train/', fd_name)
        else:
            fd_path = os.path.join('./features/test/', fd_name)
        joblib.dump(fd, fd_path,compress=3) # save the features to disk
        num += 1
        print "%d saving: %s." %(num,fd_name)

def rgb2gray(im):
    gray = im[:, :, 0]*0.2989+im[:, :, 1]*0.5870+im[:, :, 2]*0.1140
    return gray

if __name__ == '__main__':
    t0 = time.time()
    # positive test set and the positive training set that needs cropping
    Ptrain_filePath = r'./train/positive'
    Ptest_filePath = r'./test/positive'
    PTrainData,P_train_num = getDataWithCrop(Ptrain_filePath,np.array([[1]]))
    getFeat(PTrainData,'train')
    PTestData,P_test_num = getData(Ptest_filePath,np.array([[1]]))
    getFeat(PTestData,'test')
    # positive training images that do not need cropping
    Pres_train_filePath = r'./train/positive_rest'
    PresTrainData,Pres_train_num = getData(Pres_train_filePath,np.array([[1]]))
    getFeat(PresTrainData,'train')
    # negative test set and negative training set (no cropping)
    Ntrain_filePath = r'./train/negative'
    Ntest_filePath = r'./test/negative'
    NTrainData,N_train_num = getData(Ntrain_filePath,np.array([[0]]))
    getFeat(NTrainData,'train')
    NTestData,N_test_num = getData(Ntest_filePath,np.array([[0]]))
    getFeat(NTestData,'test')
    t1 = time.time()
    print "------------------------------------------------"
    print "Train Positive: %d" %(P_train_num + Pres_train_num)
    print "Train Negative: %d" %N_train_num
    print "Train Total: %d" %(P_train_num + Pres_train_num + N_train_num)
    print "------------------------------------------------"
    print "Test Positive: %d" %P_test_num
    print "Test Negative: %d" %N_test_num
    print "Test Total: %d" %(P_test_num+N_test_num)
    print "------------------------------------------------"
    print 'The cost of time is: %f'%(t1-t0)
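After the script finishes, each .feat file holds one HoG vector with the class label appended as the last element. A quick sanity check on a single saved file (the file name below is only an example, not one from the dataset):

from sklearn.externals import joblib

fd = joblib.load('./features/train/n03147509_1234.feat')  # hypothetical file name
print(fd.shape)   # HoG features plus one trailing label value
print(fd[-1])     # 1.0 for positive, 0.0 for negative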

Directory layout:

./train/positive        # training images of the target object, cropped using the xml bbox
./test/positive         # test images of the target object
./train/positive_rest   # training images of the target object that do not need cropping
./train/negative        # training images without the target object
./test/negative         # test images without the target object
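The scripts also assume that ./train/Annotation, ./features/train, ./features/test and ./models already exist. A small sketch to create the whole layout up front:

import os

dirs = ['./train/positive', './train/positive_rest', './train/negative',
        './train/Annotation', './test/positive', './test/negative',
        './features/train', './features/test', './models']
for d in dirs:
    if not os.path.exists(d):
        os.makedirs(d)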

SVM training code:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Thu Jun 15 16:38:03 2017

@author: hans
"""
import sklearn.svm as ssv
from sklearn.externals import joblib
import glob
import os
import time

if __name__ == "__main__":
    model_path = './models/svm.model'
    train_feat_path = './features/train'
    fds = []
    labels = []
    num = 0
    for feat_path in glob.glob(os.path.join(train_feat_path, '*.feat')):
        num += 1
        data = joblib.load(feat_path) # load one feature file
        fds.append(data[:-1]) # features: everything except the last element
        labels.append(data[-1]) # label: the last element
        print "%d Dealing with %s" %(num,feat_path)
    t0 = time.time()
#------------------------SVM--------------------------------------------------
    clf = ssv.SVC(kernel='rbf')
    print "Training a SVM Classifier."
    clf.fit(fds, labels)
    joblib.dump(clf, model_path) # save the model
#------------------------SVM--------------------------------------------------
    t1 = time.time()
    print "Classifier saved to {}".format(model_path)
    print 'The cost of time is: %f seconds' % (t1-t0)
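The RBF-kernel SVC above runs with its default C and gamma. As a simpler alternative to hand-tuning (or to the PSO search mentioned in the afterword), a small cross-validated grid search is a common baseline; a sketch reusing the fds and labels lists built above, assuming scikit-learn 0.18+ and with purely illustrative grid values:

from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {'C': [0.1, 1, 10, 100],
              'gamma': [1e-4, 1e-3, 1e-2, 1e-1]}
grid = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=3)
grid.fit(fds, labels)
print(grid.best_params_)
print(grid.best_score_)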

SVM testing code:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Thu Jun 15 16:44:53 2017

@author: hans
"""
from sklearn.externals import joblib
import glob
import os
import time

if __name__ == "__main__":
    model_path = './models/svm.model'
    test_feat_path = './features/test'
    total = 0
    num = 0
    t0 = time.time()
    clf = joblib.load(model_path) # load the trained model
    for feat_path in glob.glob(os.path.join(test_feat_path, '*.feat')):
        total += 1
        print "%d processing: %s" %(total, feat_path)
        data_test = joblib.load(feat_path)
        data_test_feat = data_test[:-1].reshape((1,-1)) # features without the trailing label
        result = clf.predict(data_test_feat)
        if int(result) == int(data_test[-1]): # compare prediction against the label
            num += 1
    rate = float(num)/total
    t1 = time.time()
    print 'The classification accuracy is %f' %rate
    print 'The cost of time is: %f seconds' % (t1-t0)
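Accuracy alone can hide class imbalance. If you also want per-class precision and recall, you can collect the predictions and labels inside the loop and print a report afterwards; a sketch of the extra lines, reusing glob, os, joblib, test_feat_path and clf from the script above (y_true and y_pred are new names, not in the original script):

from sklearn.metrics import classification_report

y_true, y_pred = [], []
for feat_path in glob.glob(os.path.join(test_feat_path, '*.feat')):
    data_test = joblib.load(feat_path)
    y_true.append(int(data_test[-1]))
    y_pred.append(int(clf.predict(data_test[:-1].reshape((1, -1)))))

print(classification_report(y_true, y_pred, target_names=['negative', 'positive']))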

Afterword

I took the images from ILSVRC: a training set of a little over 8000 images, half positive and half negative, and a test set of a little over 2000 images, also split evenly. I later added PCA; it had little effect on accuracy but makes the model more robust. I will release the code with PCA in my next post. The main reason for PCA is that I later used PSO to search for the optimal SVM parameters C and gamma, and the features had to be reduced in dimensionality first, otherwise the computation would be too expensive.
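For reference, a minimal sketch of the kind of PCA reduction mentioned above, applied to the stacked training features before the parameter search (the component count 100 is arbitrary; the actual PCA code will follow in the next post):

from sklearn.decomposition import PCA

pca = PCA(n_components=100)           # arbitrary choice, for illustration only
fds_reduced = pca.fit_transform(fds)  # fds: list of training HoG vectors
print(fds_reduced.shape)
print(pca.explained_variance_ratio_.sum())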
