SURF Source Code Walkthrough (with a Source Download Link)
Though titled SURF, the analysis here actually follows the OpenSURF code described in the document "Notes on the OpenSURF Library".
Download: http://opensurf1.googlecode.com/files/OpenSURFcpp.zip
The program first defines #define PROCEDURE 1 and then enters main():
int main(void)
{
  if (PROCEDURE == 1) return mainImage();        // interest-point detection on a static image
  if (PROCEDURE == 2) return mainVideo();        // real-time detection from a camera
  if (PROCEDURE == 3) return mainMatch();        // image matching: find a known object image in a video sequence
  if (PROCEDURE == 4) return mainMotionPoints(); // motion detection in video
  if (PROCEDURE == 5) return mainStaticMatch();  // interest-point matching between two static images
  if (PROCEDURE == 6) return mainKmeans();       // k-means clustering on a static image's points
}
Let's take static-image interest-point detection (PROCEDURE == 1) as our example of how SURF extracts feature points.
int mainImage(void)
{
  // Declare Ipoints and other stuff
  IpVec ipts;
  IplImage *img = cvLoadImage("imgs/sf.jpg");

  // Detect and describe interest points in the image
  clock_t start = clock();
  surfDetDes(img, ipts, false, 5, 4, 2, 0.0004f);  // SURF detection + description entry point
  clock_t end = clock();

  std::cout << "OpenSURF found: " << ipts.size() << " interest points" << std::endl;
  std::cout << "OpenSURF took: " << float(end - start) / CLOCKS_PER_SEC << " seconds" << std::endl;

  // Draw the detected points
  drawIpoints(img, ipts);

  // Display the result
  showImage(img);

  return 0;
}
Notes:
IpVec is defined in ipoint.h as typedef std::vector<Ipoint> IpVec; — in other words, IpVec is a vector whose elements are Ipoints. Ipoint itself is a class (also in ipoint.h) that the author treats like a struct: everything is public. A trimmed sketch of it follows these notes.
The clock() calls are there to time the run: one records the start, the other the end, and their difference — printed above — is the time surfDetDes() needs for feature extraction.
drawIpoints() draws the detected interest points onto img — put plainly, it is the point-annotation function.
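For orientation, here is a trimmed sketch of Ipoint. Only the members that the code below actually touches are shown; the real class in ipoint.h declares a few more.

#include <vector>

// Trimmed sketch of the Ipoint class (ipoint.h) -- only the members used
// in this walkthrough; the real class carries additional fields.
class Ipoint {
public:
  float x, y;            // sub-pixel position in the original image
  float scale;           // detected scale
  float orientation;     // dominant orientation in radians
  int laplacian;         // sign of the Laplacian (trace of the Hessian)
  float descriptor[64];  // the SURF-64 descriptor vector
};
typedef std::vector<Ipoint> IpVec;  // as defined in ipoint.h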
Now let's step inside surfDetDes() and see what it does. It is defined in surflib.h:
//! Library function builds vector of described interest points
inline void surfDetDes(IplImage *img,                 /* image to find Ipoints in */
                       std::vector<Ipoint> &ipts,     /* reference to vector of Ipoints */
                       bool upright = false,          /* run in rotation invariant mode? */
                       int octaves = OCTAVES,         /* number of octaves to calculate */
                       int intervals = INTERVALS,     /* number of intervals per octave */
                       int init_sample = INIT_SAMPLE, /* initial sampling step */
                       float thres = THRES            /* blob response threshold */)
{
  // Create integral-image representation of the image
  IplImage *int_img = Integral(img);

  // Create Fast Hessian Object
  FastHessian fh(int_img, ipts, octaves, intervals, init_sample, thres);

  // Extract interest points and store in vector ipts
  fh.getIpoints();

  // Create Surf Descriptor Object
  Surf des(int_img, ipts);

  // Extract the descriptors for the ipts
  des.getDescriptors(upright);

  // Deallocate the integral image
  cvReleaseImage(&int_img);
}
Overall, surfDetDes() lays out the concrete steps:
1. Build the integral image int_img;
2. Construct the fast-Hessian object fh;
3. Use fh to extract the interest points;
4. Build the SURF descriptors and attach them to the interest points;
5. Release the integral image.
① Building the integral image
First, find Integral() in integral.cpp:
IplImage *Integral(IplImage *source)
{
  // convert the image to single channel 32f
  IplImage *img = getGray(source);
  IplImage *int_img = cvCreateImage(cvGetSize(img), IPL_DEPTH_32F, 1);

  // set up variables for data access
  int height = img->height;
  int width = img->width;
  int step = img->widthStep/sizeof(float);
  float *data = (float *) img->imageData;
  float *i_data = (float *) int_img->imageData;

  // first row only
  float rs = 0.0f;
  for (int j = 0; j < width; j++)
  {
    rs += data[j];
    i_data[j] = rs;
  }

  // remaining cells are sum above and to the left
  for (int i = 1; i < height; ++i)
  {
    rs = 0.0f;
    for (int j = 0; j < width; ++j)
    {
      rs += data[i*step+j];
      i_data[i*step+j] = rs + i_data[(i-1)*step+j];
    }
  }

  // release the gray image
  cvReleaseImage(&img);

  // return the integral image
  return int_img;
}
1. Convert the input to a single-channel grayscale image and create an image array of the same size for the integral image int_img.
2. Fetch the image attributes — height and width — and the data pointers of the gray image and the integral image, data and i_data (note the pixel type here is float).
3. The first row is computed first, as a plain 1-D running sum. Every remaining cell comes from the recurrence i_data[i*step+j] = rs + i_data[(i-1)*step+j]: for the current cell (i, j), rs is the running sum of row i up to and including column j, and i_data[(i-1)*step+j] is the already-finished integral value one row up. (The original post illustrated this with a figure: the current cell in black, the cell one row above in green, and rs as the strip of the current row left of and including the black cell.) A standalone check of the recurrence follows this list.
4. Release the grayscale image and return the integral image.
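As a quick sanity check of that recurrence, here is a standalone toy (not part of OpenSURF) applying it to a 3x3 array:

#include <cstdio>

// Standalone check of the integral-image recurrence on a 3x3 array:
// ii(i,j) = (running sum of row i up to column j) + ii(i-1,j).
int main()
{
  float img[3][3] = {{1,2,3},{4,5,6},{7,8,9}}, ii[3][3];
  for (int i = 0; i < 3; ++i) {
    float rs = 0.f;
    for (int j = 0; j < 3; ++j) {
      rs += img[i][j];
      ii[i][j] = rs + (i > 0 ? ii[i-1][j] : 0.f);
    }
  }
  printf("%.0f\n", ii[2][2]);  // prints 45, the sum of all nine pixels
  return 0;
}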
A related helper in integral.h is BoxIntegral():
inline float BoxIntegral(IplImage *img, int row, int col, int rows, int cols)
Its parameters are the (integral) image, the top-left corner (row, col) of the box — corner A below — and the box height rows and width cols. With the integral values at the four corners of the box laid out as
A B
C D
the sum of all pixels inside the box is S = int_img(A) + int_img(D) - int_img(B) - int_img(C).
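The post only quotes the signature, so here is a sketch of the body consistent with the corner formula above; treat it as a reconstruction — the exact boundary clamping in the shipped integral.h may differ slightly.

#include <algorithm>
#include <opencv/cv.h>  // IplImage (legacy OpenCV C API)

inline float BoxIntegral(IplImage *img, int row, int col, int rows, int cols)
{
  float *data = (float *) img->imageData;
  int step = img->widthStep / sizeof(float);

  // Corner indices, clamped to the image and shifted by one because the
  // integral value at (r, c) already includes row r and column c.
  int r1 = std::min(row,        img->height) - 1;  // row of A and B
  int c1 = std::min(col,        img->width)  - 1;  // column of A and C
  int r2 = std::min(row + rows, img->height) - 1;  // row of C and D
  int c2 = std::min(col + cols, img->width)  - 1;  // column of B and D

  float A = 0.f, B = 0.f, C = 0.f, D = 0.f;
  if (r1 >= 0 && c1 >= 0) A = data[r1 * step + c1];
  if (r1 >= 0 && c2 >= 0) B = data[r1 * step + c2];
  if (r2 >= 0 && c1 >= 0) C = data[r2 * step + c1];
  if (r2 >= 0 && c2 >= 0) D = data[r2 * step + c2];

  return std::max(0.f, A - B - C + D);
}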
② Computing the Hessian responses
FastHessian, the class that computes the Hessian matrix responses, is declared in fasthessian.h and implemented in fasthessian.cpp.
class FastHessian {
public:
//! Constructor without image
FastHessian(std::vector<Ipoint> &ipts,
const int octaves = OCTAVES,
const int intervals = INTERVALS,
const int init_sample = INIT_SAMPLE,
const float thres = THRES);
//! Constructor with image
FastHessian(IplImage *img,
std::vector<Ipoint> &ipts,
const int octaves = OCTAVES,
const int intervals = INTERVALS,
const int init_sample = INIT_SAMPLE,
const float thres = THRES);
//! Destructor
~FastHessian();
//! Save the parameters
void saveParameters(const int octaves,
const int intervals,
const int init_sample,
const float thres);
//! Set or re-set the integral image source
void setIntImage(IplImage *img);
//! Find the image features and write into vector of features
void getIpoints();
private:
//---------------- Private Functions -----------------//
//! Build map of DoH responses
void buildResponseMap();
//! Calculate DoH responses for supplied layer
void buildResponseLayer(ResponseLayer *r);
//! 3x3x3 Extrema test
int isExtremum(int r, int c, ResponseLayer *t, ResponseLayer *m, ResponseLayer *b);
//! Interpolation functions - adapted from Lowe's SIFT implementation
void interpolateExtremum(int r, int c, ResponseLayer *t, ResponseLayer *m, ResponseLayer *b);
void interpolateStep(int r, int c, ResponseLayer *t, ResponseLayer *m, ResponseLayer *b,
double* xi, double* xr, double* xc );
CvMat* deriv3D(int r, int c, ResponseLayer *t, ResponseLayer *m, ResponseLayer *b);
CvMat* hessian3D(int r, int c, ResponseLayer *t, ResponseLayer *m, ResponseLayer *b);
//---------------- Private Variables -----------------//
//! Pointer to the integral Image, and its attributes
IplImage *img;
int i_width, i_height;
//! Reference to vector of features passed from outside
std::vector<Ipoint> &ipts;
//! Response stack of determinant of hessian values
std::vector<ResponseLayer *> responseMap;
//! Number of Octaves
int octaves;
//! Number of Intervals per octave
int intervals;
//! Initial sampling step for Ipoint detection
int init_sample;
//! Threshold value for blob responses
float thresh;
};
The public section declares two constructors — with and without the source-image parameter — followed by the destructor ~FastHessian and the remaining member functions. Their implementations in fasthessian.cpp, one by one:
Both constructors call saveParameters(octaves, intervals, init_sample, thresh) to set the pyramid-construction parameters; the constructor that takes an image additionally calls setIntImage(img) to set the current integral image.
//! Save the parameters
void FastHessian::saveParameters(const int octaves, const int intervals,
                                 const int init_sample, const float thresh)
{
  // Initialise variables with bounds-checked values
  this->octaves =
    (octaves > 0 && octaves <= 4 ? octaves : OCTAVES);
  this->intervals =
    (intervals > 0 && intervals <= 4 ? intervals : INTERVALS);
  this->init_sample =
    (init_sample > 0 && init_sample <= 6 ? init_sample : INIT_SAMPLE);
  this->thresh = (thresh >= 0 ? thresh : THRES);
}
//! Set or re-set the integral image source
void FastHessian::setIntImage(IplImage *img)
{
  // Change the source image
  this->img = img;
  i_height = img->height;
  i_width = img->width;
}
Since the header already defines
static const int OCTAVES = 5;        // number of octaves
static const int INTERVALS = 4;      // intervals per octave
static const float THRES = 0.0004f;  // response threshold
static const int INIT_SAMPLE = 2;    // initial sampling step
the job of saveParameters is simply to clamp the parameters, guarding against values that are too large or too small.
FastHessian::getIpoints() extracts the interest points:
//! Find the image features and write into vector of features
void FastHessian::getIpoints()
{
  // filter index map
  static const int filter_map[OCTAVES][INTERVALS] = {{0,1,2,3}, {1,3,4,5}, {3,5,6,7}, {5,7,8,9}, {7,9,10,11}};

  // Clear the vector of existing ipts
  ipts.clear();

  // Build the response map
  buildResponseMap();

  // Get the response layers
  ...
  ...
}
It first initialises filter_map and clears ipts, the vector that will hold the interest points.
buildResponseMap() then creates the scale-space response layers; the filter sizes agree exactly with the paper:
// Oct1: 9, 15, 21, 27
// Oct2: 15, 27, 39, 51
// Oct3: 27, 51, 75, 99
// Oct4: 51, 99, 147, 195
// Oct5: 99, 195, 291, 387
These are the template (filter) sizes for each octave; the increment between consecutive sizes doubles every octave: 6, 12, 24, 48, 96. Each entry of responseMap is built from ResponseLayer(int width, int height, int step, int filter), defined in responselayer.h, where width and height equal the actual image size divided by step (step starts at 2) and filter is the side length of the box filter.
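There is a simple rule behind this table: within octave o (counting from 1), the i-th filter is 3·(2^o·i + 1) pixels wide, which is exactly where the per-octave increments 6, 12, 24, 48, 96 come from. A toy snippet (not part of OpenSURF) that regenerates the table:

#include <cstdio>

// Regenerates the filter-size table above from the rule
// size(o, i) = 3 * (2^o * i + 1), o = 1..5 (octave), i = 1..4 (interval).
int main()
{
  for (int o = 1; o <= 5; ++o) {
    printf("Oct%d:", o);
    for (int i = 1; i <= 4; ++i)
      printf(" %d", 3 * ((1 << o) * i + 1));
    printf("\n");
  }
  return 0;
}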
Next, buildResponseLayer(responseMap[i]) is applied to each layer; the filtered results are stored in the layer's responses and laplacian arrays.
void FastHessian::buildResponseLayer(ResponseLayer *rl)
{
  float *responses = rl->responses;          // response storage
  unsigned char *laplacian = rl->laplacian;  // laplacian sign storage
  int step = rl->step;                       // step size for this filter
  int b = (rl->filter - 1) / 2;              // border for this filter
  int l = rl->filter / 3;                    // lobe for this filter (filter size / 3)
  int w = rl->filter;                        // filter size
  float inverse_area = 1.f/(w*w);            // normalisation factor
  float Dxx, Dyy, Dxy;

  for (int r, c, ar = 0, index = 0; ar < rl->height; ++ar)
  {
    for (int ac = 0; ac < rl->width; ++ac, index++)
    {
      // get the image coordinates
      r = ar * step;
      c = ac * step;

      // Compute response components
      Dxx = BoxIntegral(img, r - l + 1, c - b, 2*l - 1, w)
          - BoxIntegral(img, r - l + 1, c - l / 2, 2*l - 1, l)*3;
      Dyy = BoxIntegral(img, r - b, c - l + 1, w, 2*l - 1)
          - BoxIntegral(img, r - l / 2, c - l + 1, l, 2*l - 1)*3;
      Dxy = + BoxIntegral(img, r - l, c + 1, l, l)
            + BoxIntegral(img, r + 1, c - l, l, l)
            - BoxIntegral(img, r - l, c - l, l, l)
            - BoxIntegral(img, r + 1, c + 1, l, l);

      // Normalise the filter responses with respect to their size
      Dxx *= inverse_area;
      Dyy *= inverse_area;
      Dxy *= inverse_area;

      // Get the determinant of hessian response & laplacian sign
      responses[index] = (Dxx * Dyy - 0.81f * Dxy * Dxy);
      laplacian[index] = (Dxx + Dyy >= 0 ? 1 : 0);

#ifdef RL_DEBUG
      // create list of the image coords for each response
      rl->coords.push_back(std::make_pair<int,int>(r,c));
#endif
    }
  }
}
The original post illustrated the Dyy and Dxy box layouts with figures (for w = 9 and l = w/3 = 3; each lobe is 2l-1 across, and Dxx is simply Dyy transposed). In code terms: Dxy adds the two l×l box integrals on one diagonal around the centre and subtracts the two on the other, while Dyy takes the whole region minus 3× the centre lobe — exactly the BoxIntegral expressions above, each normalised by 1/w².
The results land in the layer's one-dimensional responses and laplacian arrays, whose size is ResponseLayer->width * ResponseLayer->height.
At this point every sample of every layer has its blob response det(H_approx) = Dxx·Dyy − (0.9·Dxy)² — the 0.81f in the code is 0.9². Next comes the extremum test.
③ Judging extrema from the per-layer det(H_approx)
Back in getIpoints() in fasthessian.cpp, the detector now takes, within each of the octaves, every band of three adjacent layers (an octave has 4 layers, so there are 2 such bands per octave):
ResponseLayer *b, *m, *t;
for (int o = 0; o < octaves; ++o)
  for (int i = 0; i <= 1; ++i)
  {
    b = responseMap.at(filter_map[o][i]);
    m = responseMap.at(filter_map[o][i+1]);
    t = responseMap.at(filter_map[o][i+2]);

    // loop over middle response layer at density of the most
    // sparse layer (always top), to find maxima across scale and space
    for (int r = 0; r < t->height; ++r)
    {
      for (int c = 0; c < t->width; ++c)
      {
        if (isExtremum(r, c, t, m, b))
        {
          interpolateExtremum(r, c, t, m, b);
        }
      }
    }
  }
The test relies on two helpers, isExtremum() and interpolateExtremum(), both found in the same .cpp. Let's analyse them in turn.
//! Non Maximal Suppression function
int FastHessian::isExtremum(int r, int c, ResponseLayer *t, ResponseLayer *m, ResponseLayer *b)
{
  // bounds check: if the point lies closer to the border than this, its
  // full neighbourhood is unavailable, so it cannot be declared an extremum
  int layerBorder = (t->filter + 1) / (2 * t->step);
  if (r <= layerBorder || r >= t->height - layerBorder || c <= layerBorder || c >= t->width - layerBorder)
    return 0;

  // check the candidate point in the middle layer is above thresh
  float candidate = m->getResponse(r, c, t);
  if (candidate < thresh)  // below the threshold: also not an extremum
    return 0;

  for (int rr = -1; rr <= 1; ++rr)
  {
    for (int cc = -1; cc <= 1; ++cc)
    {
      // if any response in the 3x3x3 neighbourhood is greater,
      // the candidate is not a maximum
      if (
        t->getResponse(r+rr, c+cc) >= candidate ||
        ((rr != 0 || cc != 0) && m->getResponse(r+rr, c+cc, t) >= candidate) ||
        b->getResponse(r+rr, c+cc, t) >= candidate
      )  // note: && binds tighter than ||
        return 0;
    }
  }

  return 1;
}
Be careful here: the layers are sampled at different densities, so responses must be compared on a common grid. getResponse() handles this by first computing the scale ratio between the two layers: if the ratio is 2 and the current point in the (sparsest) top layer is (20, 25), the point fetched from the denser layer below is (40, 50), not (20, 25).
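A sketch of that coordinate mapping, reconstructed from how getResponse is called above — treat the exact body in responselayer.h as an assumption:

// Sketch of ResponseLayer::getResponse: 'src' is the sparser layer whose
// grid (row, column) refers to; the width ratio rescales the coordinates
// into this (denser) layer's own grid.
inline float ResponseLayer::getResponse(unsigned int row, unsigned int column, ResponseLayer *src)
{
  int scale = this->width / src->width;  // e.g. 2 in the example above
  return responses[(scale * row) * this->width + (scale * column)];
}

If the candidate really is an extremum, interpolateExtremum() then refines it: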
//! Interpolate scale-space extrema to subpixel accuracy to form an image feature.
void FastHessian::interpolateExtremum(int r, int c, ResponseLayer *t, ResponseLayer *m, ResponseLayer *b)
{
  // get the step distance between filters
  // check the middle filter is mid way between top and bottom
  int filterStep = (m->filter - b->filter);
  assert(filterStep > 0 && t->filter - m->filter == m->filter - b->filter);

  // Get the offsets to the actual location of the extremum
  double xi = 0, xr = 0, xc = 0;
  interpolateStep(r, c, t, m, b, &xi, &xr, &xc);

  // If point is sufficiently close to the actual extremum
  if (fabs(xi) < 0.5f && fabs(xr) < 0.5f && fabs(xc) < 0.5f)
  {
    Ipoint ipt;
    ipt.x = static_cast<float>((c + xc) * t->step);
    ipt.y = static_cast<float>((r + xr) * t->step);
    ipt.scale = static_cast<float>((0.1333f) * (m->filter + xi * filterStep));
    ipt.laplacian = static_cast<int>(m->getLaplacian(r,c,t));
    ipts.push_back(ipt);
  }
}
This function refines the extremum to sub-pixel accuracy in the original image and saves the result in ipt.
Opening interpolateStep: deriv3D takes the first derivatives of the response in the three directions (column, row, scale), hessian3D the 3x3 matrix of second derivatives, and from these the three offsets xi, xr, xc are computed.
void FastHessian::interpolateStep(int r, int c, ResponseLayer *t, ResponseLayer *m, ResponseLayer *b,
                                  double* xi, double* xr, double* xc)
{
  CvMat* dD, * H, * H_inv, X;
  double x[3] = { 0 };

  dD = deriv3D(r, c, t, m, b);
  H = hessian3D(r, c, t, m, b);
  H_inv = cvCreateMat(3, 3, CV_64FC1);
  cvInvert(H, H_inv, CV_SVD);                           // invert H via SVD
  cvInitMatHeader(&X, 3, 1, CV_64FC1, x, CV_AUTOSTEP);  // wrap x[] as the 3x1 matrix X
  cvGEMM(H_inv, dD, -1, NULL, 0, &X, 0);                // X = -H_inv * dD

  cvReleaseMat(&dD);
  cvReleaseMat(&H);
  cvReleaseMat(&H_inv);

  *xi = x[2];
  *xr = x[1];
  *xc = x[0];
}
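The solve above is the standard quadratic fit borrowed from Lowe's SIFT: Taylor-expand the response around the sample and zero the derivative, giving the offset x̂ = -H⁻¹·∇D. Accordingly, deriv3D is just three central differences across the layers; here is a sketch consistent with how it is used above (treat the exact body as an assumption):

// Central differences of the response in column, row and scale.
CvMat* FastHessian::deriv3D(int r, int c, ResponseLayer *t, ResponseLayer *m, ResponseLayer *b)
{
  // dx, dy: neighbours within the middle layer; ds: top minus bottom layer
  double dx = (m->getResponse(r, c + 1, t) - m->getResponse(r, c - 1, t)) / 2.0;
  double dy = (m->getResponse(r + 1, c, t) - m->getResponse(r - 1, c, t)) / 2.0;
  double ds = (t->getResponse(r, c)        - b->getResponse(r, c, t))     / 2.0;

  CvMat* dI = cvCreateMat(3, 1, CV_64FC1);
  cvmSet(dI, 0, 0, dx);
  cvmSet(dI, 1, 0, dy);
  cvmSet(dI, 2, 0, ds);
  return dI;
}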
Now we extract descriptors around the interest points.
④ Extracting the feature descriptors
Moving to getDescriptors() in surf.cpp: since upright was initialised to false, each point first gets a dominant orientation assigned and its neighbourhood rotated accordingly before the descriptor is extracted; had it been true, the descriptor would be taken from the unrotated neighbourhood directly.
//! Describe all features in the supplied vector
void Surf::getDescriptors(bool upright)
{
  // Check there are Ipoints to be described
  if (!ipts.size()) return;

  // Get the size of the vector for fixed loop bounds
  int ipts_size = (int)ipts.size();

  if (upright)
  {
    // U-SURF loop just gets descriptors
    for (int i = 0; i < ipts_size; ++i)
    {
      // Set the Ipoint to be described
      index = i;

      // Extract upright (i.e. not rotation invariant) descriptors
      getDescriptor(true);
    }
  }
  else
  {
    // Main SURF-64 loop assigns orientations and gets descriptors
    for (int i = 0; i < ipts_size; ++i)
    {
      // Set the Ipoint to be described
      index = i;

      // Assign Orientations and extract rotation invariant descriptors
      getOrientation();
      getDescriptor(false);
    }
  }
}
So the two branches differ only in whether getOrientation() is called before extracting the descriptor. The upright variant ignores rotation and suits image pairs that differ only in scale; the other branch adds rotation invariance and is the more general one.
A few details worth noting:
1. As in the paper, samples are taken from a circular region of radius 6 (in scale units) around the point, which contains 109 lattice points — you can verify the count with the snippet after these notes.
2. Each sample's direction comes from the Haar wavelet responses haarX and haarY — the dx and dy of a gradient formula, in effect — and getAngle() maps them to an angle θ covering the full 0 to 2π range, rather than the limited range a bare arctan would give.
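A toy check of the 109 figure (standalone, not part of OpenSURF):

#include <cstdio>

// Count lattice points strictly inside the radius-6 disc (i*i + j*j < 36),
// the sampling region used when assigning the dominant orientation.
int main()
{
  int count = 0;
  for (int i = -6; i <= 6; ++i)
    for (int j = -6; j <= 6; ++j)
      if (i * i + j * j < 36) ++count;
  printf("%d\n", count);  // prints 109
  return 0;
}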
//! Calculate Haar wavelet responses in x direction
inline float Surf::haarX(int row, int column, int s)
{
  return BoxIntegral(img, row-s/2, column, s, s/2)
    -1 * BoxIntegral(img, row-s/2, column-s/2, s, s/2);
}

//-------------------------------------------------------

//! Calculate Haar wavelet responses in y direction
inline float Surf::haarY(int row, int column, int s)
{
  return BoxIntegral(img, row, column-s/2, s/2, s)
    -1 * BoxIntegral(img, row-s/2, column-s/2, s/2, s);
}

//! Get the angle (0..2pi) of the vector (X, Y) from the +x axis
float Surf::getAngle(float X, float Y)
{
  if (X > 0 && Y >= 0)
    return atan(Y/X);
  if (X < 0 && Y >= 0)
    return pi - atan(-Y/X);
  if (X < 0 && Y < 0)
    return pi + atan(Y/X);
  if (X > 0 && Y < 0)
    return 2*pi - atan(-Y/X);
  return 0;
}
(The original post showed the Haar wavelet templates in a figure here: black regions weigh -1 and white regions +1, i.e. each response is the white-region integral minus the black-region integral.)
Inside getOrientation(), resX and resY are the haarX and haarY responses computed above, each weighted by a coefficient from the gauss25 Gaussian table:
void Surf::getOrientation()
{
  ......
  // Samples are binned into sliding 60-degree windows by their angle;
  // e.g. an 80-degree sample belongs to the window centred at 90 degrees,
  // since 90 - 30 < 80 < 90 + 30.

  // loop slides pi/3 window around feature point
  for (ang1 = 0; ang1 < 2*pi; ang1+=0.15f) {
    ang2 = ( ang1+pi/3.0f > 2*pi ? ang1-5.0f*pi/3.0f : ang1+pi/3.0f);  // keep ang1 + 60 degrees from exceeding 360 (wrap around)
    sumX = sumY = 0.f;

    // for the radius-6 disc neighbourhood, Ang.size() == 109
    for (unsigned int k = 0; k < Ang.size(); ++k)
    {
      // get angle from the x-axis of the sample point
      const float & ang = Ang[k];

      // determine whether the point is within the window
      if (ang1 < ang2 && ang1 < ang && ang < ang2)
      {
        sumX+=resX[k];
        sumY+=resY[k];
      }
      else if (ang2 < ang1 &&
        ((ang > 0 && ang < ang2) || (ang > ang1 && ang < 2*pi) ))
      {
        sumX+=resX[k];
        sumY+=resY[k];
      }
    }

    // if the vector produced from this window is longer than all
    // previous vectors then this forms the new dominant direction:
    // find the window whose accumulated (sumX, sumY) is longest and
    // take its angle as the orientation
    if (sumX*sumX + sumY*sumY > max)
    {
      // store largest orientation
      max = sumX*sumX + sumY*sumY;
      orientation = getAngle(sumX, sumY);
    }
  }

  // assign orientation of the dominant response vector
  ipt->orientation = orientation;
}
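Both getOrientation() above and getDescriptor() below weight responses with gaussian(...). It is a plain 2-D Gaussian; a sketch consistent with those calls (the normalisation constant is my assumption, and the shipped surf.cpp may also provide an integer-argument overload):

// Plain 2-D Gaussian weight centred at the origin -- sketch only.
inline float Surf::gaussian(float x, float y, float sig)
{
  return 1.0f / (2.0f * pi * sig * sig) * exp(-(x * x + y * y) / (2.0f * sig * sig));
}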
And now, finally, the descriptor extraction itself:
void Surf::getDescriptor(bool bUpright)
{
  int y, x, sample_x, sample_y, count=0;
  int i = 0, ix = 0, j = 0, jx = 0, xs = 0, ys = 0;
  float scale, *desc, dx, dy, mdx, mdy, co, si;
  float gauss_s1 = 0.f, gauss_s2 = 0.f;
  float rx = 0.f, ry = 0.f, rrx = 0.f, rry = 0.f, len = 0.f;
  float cx = -0.5f, cy = 0.f;  // Subregion centers for the 4x4 gaussian weighting

  Ipoint *ipt = &ipts[index];
  scale = ipt->scale;   // scale
  x = fRound(ipt->x);
  y = fRound(ipt->y);   // spatial position
  desc = ipt->descriptor;

  if (bUpright)
  {
    // upright case: no rotation needed
    co = 1;
    si = 0;
  }
  else
  {
    // rotate the sampling neighbourhood by the assigned orientation
    co = cos(ipt->orientation);
    si = sin(ipt->orientation);
  }

  i = -8;

  // Calculate descriptor for this interest point
  while (i < 12)
  {
    j = -8;
    i = i-4;
    cx += 1.f;
    cy = -0.5f;

    while (j < 12)
    {
      dx=dy=mdx=mdy=0.f;  // the four accumulators that form the descriptor entries
      cy += 1.f;
      j = j - 4;
      ix = i + 5;  // ix = i0 + 1, where i0 and j0 run through -8, -3, 2, 7
      jx = j + 5;  // jx = j0 + 1

      // fRound rounds to the nearest integer: nearest-neighbour lookup of the rotated point
      xs = fRound(x + ( -jx*scale*si + ix*scale*co));
      ys = fRound(y + ( jx*scale*co + ix*scale*si));

      for (int k = i; k < i + 9; ++k)
      {
        for (int l = j; l < j + 9; ++l)
        {
          // Get coords of sample point on the rotated axis
          sample_x = fRound(x + (-l*scale*si + k*scale*co));
          sample_y = fRound(y + ( l*scale*co + k*scale*si));

          // Get the gaussian weighted x and y responses
          gauss_s1 = gaussian(xs-sample_x,ys-sample_y,2.5f*scale);
          rx = haarX(sample_y, sample_x, 2*fRound(scale));
          ry = haarY(sample_y, sample_x, 2*fRound(scale));

          // Get the gaussian weighted x and y responses on rotated axis
          rrx = gauss_s1*(-rx*si + ry*co);
          rry = gauss_s1*(rx*co + ry*si);

          dx += rrx;
          dy += rry;
          mdx += fabs(rrx);
          mdy += fabs(rry);
        }
      }

      // Add the values to the descriptor vector
      gauss_s2 = gaussian(cx-2.0f,cy-2.0f,1.5f);
      desc[count++] = dx*gauss_s2;
      desc[count++] = dy*gauss_s2;
      desc[count++] = mdx*gauss_s2;
      desc[count++] = mdy*gauss_s2;
      len += (dx*dx + dy*dy + mdx*mdx + mdy*mdy) * gauss_s2*gauss_s2;

      j += 9;
    }
    i += 9;
  }

  // Convert to Unit Vector (normalise the descriptor)
  len = sqrt(len);
  for (int i = 0; i < 64; ++i)
    desc[i] /= len;
}
At the top of each outer iteration i and j take the values -8, -3, 2, 7, which the original post reads as a 16x16 neighbourhood (7 - (-8) + 1 = 16) of subregion origins; note that each subregion then samples a full 9x9 block, so adjacent blocks overlap. The rotated position of a subregion centre in the original image is
xs = fRound(x + (-jx*scale*si + ix*scale*co)); ys = fRound(y + (jx*scale*co + ix*scale*si));
with co = cos(ipt->orientation) and si = sin(ipt->orientation) [ipt->orientation being the interest point's dominant angle].
In total there are 4x4 subregions, and each accumulates the sums dx, dy, |dx|, |dy| over its 9x9 Gaussian-weighted samples, so the final descriptor is a 4x4x4 = 64-dimensional vector.
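Since every descriptor is normalised to unit length, comparing two interest points reduces to the Euclidean distance between their 64-dimensional vectors — which is essentially what the matching demos (mainMatch, mainStaticMatch) rely on. A hypothetical helper, not taken from the library, to make that concrete:

#include <cmath>

// Hypothetical helper (not from OpenSURF as shown here): Euclidean distance
// between two 64-dimensional SURF descriptors; smaller means more similar.
inline float descriptorDistance(const float *d1, const float *d2)
{
  float sum = 0.f;
  for (int k = 0; k < 64; ++k) {
    float diff = d1[k] - d2[k];
    sum += diff * diff;
  }
  return std::sqrt(sum);
}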
The drawIpoints(img, ipts) call back in mainImage() should need no further explanation at this point.
Done — this took a whole day, but there it is. (The original post closed with a screenshot of the detected points.)
Reposted from blue_lg: http://www.cnblogs.com/blue-lg/archive/2012/07/20/2600436.html