OpenCV Tutorial 9 - Chapter 10

(Note: due to a limit on attachment sizes, the images for this post could not be uploaded; the original article with images is at http://dasl.mem.drexel.edu/~noahKuntz/openCVTut9.html)

Author: Noah Kuntz (2009)
Contact: nk752@drexel.edu

Keywords: OpenCV, computer vision, image processing, optic flow, tracking, motion

My Vision Tutorials Index

This tutorial assumes the reader:
(1) Has a basic knowledge of Visual C++
(2) Has some familiarity with computer vision concepts
(3) Has read the previous tutorials in this series

The rest of the tutorial is presented as follows:

  • Step 1: Optic Flow
  • Step 2: Kalman Filter
  • Final Words

Important Note!

More information on the topics of these tutorials can be found in this book: Learning OpenCV: Computer Vision with the OpenCV Library

Step 1: Optic Flow


Optic Flow Example


Optic flow is a useful tool for tracking objects in motion. The Lucas-Kanade algorithm is easily the most popular method of tracking movement. The LK tracker rests on three assumptions: brightness constancy (a pixel keeps the same intensity from one frame to the next), small movements between frames (image pyramids are needed to track larger movements), and spatial coherence (points near each other lie on the same surface and move together). The basic concept of the tracker is to estimate the velocity of a moving pixel from the ratio of the derivative of the intensity over time to the derivative of the intensity over space. Check the book and the copious research on Lucas-Kanade for more technical detail on the method. The functions we use to perform this tracking are cvGoodFeaturesToTrack to find the features, cvFindCornerSubPix to refine their locations, and cvCalcOpticalFlowPyrLK to do the actual optic flow calculation. Lastly, we draw the flow vectors on a combination of the two images to show that it works. Here is the code:
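For reference, the assumptions above lead to the standard derivation (this sketch is not part of the original tutorial): brightness constancy linearized in time gives one constraint per pixel, and the spatial-coherence assumption lets LK combine the constraints over a small window into a least-squares solve.

```latex
% Linearized brightness constancy: one equation, two unknowns (u, v)
I_x u + I_y v + I_t = 0
% In 1-D this reduces to the "ratio" mentioned above: u = -I_t / I_x.
% Summing the constraint over a window and solving in the least-squares
% sense gives the normal equations Lucas-Kanade solves per feature:
\begin{bmatrix} \sum I_x^2 & \sum I_x I_y \\ \sum I_x I_y & \sum I_y^2 \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix}
= - \begin{bmatrix} \sum I_x I_t \\ \sum I_y I_t \end{bmatrix}
```

The 2x2 matrix is the same structure tensor that cvGoodFeaturesToTrack examines: it is well conditioned exactly at corner-like points, which is why corners make good features to track.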


 

#include "stdafx.h"
#include <cv.h>
#include <highgui.h>

const int MAX_CORNERS = 500;

 

int _tmain(int argc, _TCHAR* argv[])

{

        // Load two images and allocate other structures

        IplImage* imgA = cvLoadImage("image0.png", CV_LOAD_IMAGE_GRAYSCALE);

        IplImage* imgB = cvLoadImage("image1.png", CV_LOAD_IMAGE_GRAYSCALE);

 

        CvSize img_sz = cvGetSize( imgA );

        int win_size = 15;

 

        IplImage* imgC = cvLoadImage("OpticalFlow1.png", CV_LOAD_IMAGE_UNCHANGED);

 

        // Get the features for tracking

        IplImage* eig_image = cvCreateImage( img_sz, IPL_DEPTH_32F, 1 );

        IplImage* tmp_image = cvCreateImage( img_sz, IPL_DEPTH_32F, 1 );

 

        int corner_count = MAX_CORNERS;

        CvPoint2D32f* cornersA = new CvPoint2D32f[ MAX_CORNERS ];

 

        cvGoodFeaturesToTrack( imgA, eig_image, tmp_image, cornersA, &corner_count,

               0.05, 5.0, 0, 3, 0, 0.04 );

 

        cvFindCornerSubPix( imgA, cornersA, corner_count, cvSize( win_size, win_size ),

               cvSize( -1, -1 ), cvTermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.03 ) );

 

        // Call Lucas Kanade algorithm

        char features_found[ MAX_CORNERS ];

        float feature_errors[ MAX_CORNERS ];

 

        CvSize pyr_sz = cvSize( imgA->width+8, imgB->height/3 );

 

        IplImage* pyrA = cvCreateImage( pyr_sz, IPL_DEPTH_32F, 1 );

        IplImage* pyrB = cvCreateImage( pyr_sz, IPL_DEPTH_32F, 1 );

 

        CvPoint2D32f* cornersB = new CvPoint2D32f[ MAX_CORNERS ];

 

        cvCalcOpticalFlowPyrLK( imgA, imgB, pyrA, pyrB, cornersA, cornersB, corner_count,

               cvSize( win_size, win_size ), 5, features_found, feature_errors,

                cvTermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 20, 0.3 ), 0 );

 

        // Make an image of the results

 

        for( int i=0; i<corner_count; i++ ){

               if( features_found[i]==0 || feature_errors[i]>550 ){

                       printf("Error is %f\n", feature_errors[i]);

                       continue;

               }

               printf("Got it\n");

               CvPoint p0 = cvPoint( cvRound( cornersA[i].x ), cvRound( cornersA[i].y ) );

               CvPoint p1 = cvPoint( cvRound( cornersB[i].x ), cvRound( cornersB[i].y ) );

               cvLine( imgC, p0, p1, CV_RGB(255,0,0), 2 );

        }

 

        cvNamedWindow( "ImageA", 0 );

        cvNamedWindow( "ImageB", 0 );

        cvNamedWindow( "LKpyr_OpticalFlow", 0 );

 

        cvShowImage( "ImageA", imgA );

        cvShowImage( "ImageB", imgB );

        cvShowImage( "LKpyr_OpticalFlow", imgC );

 

        cvWaitKey(0);

 

        return 0;

}

 


Step 2: Kalman Filter


Kalman Filter Prediction Example


The Kalman filter is another powerful tool for analyzing motion. The filter can be used to estimate the real position of something being tracked with better accuracy than the raw sensor data provides. The Kalman filter uses the history of measurements to build a model of the system's state that maximizes the probability for the position of the target given the past measurements. Use the book or other resources to examine the math behind this. There are good details in the comments here, so look at the code to see how to use the Kalman filter in OpenCV:
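As a quick reference (this summary is not from the original tutorial), the filter alternates two steps, which correspond to cvKalmanPredict and cvKalmanCorrect in the code below; F is the transition_matrix, H the measurement_matrix, Q the process_noise_cov, and R the measurement_noise_cov.

```latex
% Predict: propagate the state and its error covariance through the model
\hat{x}_k^- = F \hat{x}_{k-1}, \qquad P_k^- = F P_{k-1} F^T + Q
% Correct: blend in the measurement z_k via the Kalman gain K_k
K_k = P_k^- H^T \left( H P_k^- H^T + R \right)^{-1}
\hat{x}_k = \hat{x}_k^- + K_k \left( z_k - H \hat{x}_k^- \right), \qquad
P_k = \left( I - K_k H \right) P_k^-
```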


 

 

#include "stdafx.h"
#include <cv.h>
#include <highgui.h>
#include <math.h>

int _tmain(int argc, _TCHAR* argv[])

{

        // Initialize Kalman filter object, window, number generator, etc

        cvNamedWindow( "Kalman", 1 );

        CvRandState rng;

        cvRandInit( &rng, 0, 1, -1, CV_RAND_UNI );

 

        IplImage* img = cvCreateImage( cvSize(500,500), 8, 3 );

        CvKalman* kalman = cvCreateKalman( 2, 1, 0 );

 

        // State is phi, delta_phi - angle and angular velocity

        // Initialize with random guess

        CvMat* x_k = cvCreateMat( 2, 1, CV_32FC1 );

        cvRandSetRange( &rng, 0, 0.1, 0 );

        rng.disttype = CV_RAND_NORMAL;

        cvRand( &rng, x_k );

 

        // Process noise

        CvMat* w_k = cvCreateMat( 2, 1, CV_32FC1 );

 

        // Measurements, only one parameter for angle

        CvMat* z_k = cvCreateMat( 1, 1, CV_32FC1 );

        cvZero( z_k );

 

        // Transition matrix F relates the model state at steps k and k+1

        const float F[] = { 1, 1, 0, 1 };

        memcpy( kalman->transition_matrix->data.fl, F, sizeof(F));

 

        // Initialize other Kalman parameters

        cvSetIdentity( kalman->measurement_matrix, cvRealScalar(1) );

        cvSetIdentity( kalman->process_noise_cov, cvRealScalar(1e-5) );

        cvSetIdentity( kalman->measurement_noise_cov, cvRealScalar(1e-1) );

        cvSetIdentity( kalman->error_cov_post, cvRealScalar(1) );

 

        // Choose random initial state

        cvRand( &rng, kalman->state_post );

 

        // Make colors

        CvScalar yellow = CV_RGB(255,255,0);

        CvScalar white = CV_RGB(255,255,255);

        CvScalar red = CV_RGB(255,0,0);

 

        while( 1 ){

               // Predict point position

               const CvMat* y_k = cvKalmanPredict( kalman, 0 );

 

               // Generate Measurement (z_k)

               cvRandSetRange( &rng, 0, sqrt( kalman->measurement_noise_cov->data.fl[0] ), 0 );                      

               cvRand( &rng, z_k );

               cvMatMulAdd( kalman->measurement_matrix, x_k, z_k, z_k );

              

               // Plot Points

               cvZero( img );

               // Yellow is observed state

               cvCircle( img,

                       cvPoint( cvRound(img->width/2 + img->width/3*cos(z_k->data.fl[0])),

                       cvRound( img->height/2 - img->width/3*sin(z_k->data.fl[0])) ),

                       4, yellow );

               // White is the predicted state via the filter

               cvCircle( img,

                       cvPoint( cvRound(img->width/2 + img->width/3*cos(y_k->data.fl[0])),

                       cvRound( img->height/2 - img->width/3*sin(y_k->data.fl[0])) ),

                       4, white, 2 );

               // Red is the real state

               cvCircle( img,

                       cvPoint( cvRound(img->width/2 + img->width/3*cos(x_k->data.fl[0])),

                       cvRound( img->height/2 - img->width/3*sin(x_k->data.fl[0])) ),

                       4, red );

               cvShowImage( "Kalman", img );

 

               // Adjust Kalman filter state

               cvKalmanCorrect( kalman, z_k );

 

               // Apply the transition matrix F and apply "process noise" w_k

               cvRandSetRange( &rng, 0, sqrt( kalman->process_noise_cov->data.fl[0] ), 0 );

               cvRand( &rng, w_k );

               cvMatMulAdd( kalman->transition_matrix, x_k, w_k, x_k );

 

               // Exit on esc key

               if( cvWaitKey( 100 ) == 27 )

                       break;

        }

 

        return 0;

}

 



 

Final Words

This tutorial's objective was to show how to implement Lucas-Kanade optic flow and a Kalman filter for the purpose of tracking motion.

Click here to email me.
Click here to return to my Tutorials page.

 
