OpenCV 3.0 Camera Calibration


I have recently been using OpenCV 3.0 and noticed that the macro definitions have changed, though there is a pattern to the changes.

For most of them you simply drop the old CV_ prefix. For example:

 err = norm(Mat(imagePoints[i]), Mat(imagePoints2), CV_L2);

Here CV_L2 becomes NORM_L2.

cornerSubPix( viewGray, pointBuf, Size(11,11),
    Size(-1,-1), TermCriteria( CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1 ));

becomes

cornerSubPix( viewGray, pointBuf, Size(11,11),
    Size(-1,-1), TermCriteria( TermCriteria::EPS + TermCriteria::COUNT, 30, 0.1 ));


 

For the sake of calibration accuracy, however, I use only the adaptive-threshold flag and removed the sub-pixel corner refinement step:

found = findChessboardCorners( view, s.boardSize, pointBuf,
    CALIB_CB_ADAPTIVE_THRESH /*| CALIB_CB_FAST_CHECK | CALIB_CB_NORMALIZE_IMAGE*/);

 

Mat viewGray;
cvtColor(view, viewGray, COLOR_BGR2GRAY);
// imshow("Gray", viewGray);
// cornerSubPix( viewGray, pointBuf, Size(11,11),
//     Size(-1,-1), TermCriteria( TermCriteria::EPS + TermCriteria::COUNT, 30, 0.1 ));

At this point the camera calibration can be completed.

The resulting YML file is then used to undistort images.
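For reference, here is a minimal sketch of that last step, assuming the object/image point lists have been collected as above. The function name, the variable names, and the file name out_camera_data.yml are my own assumptions, not from the original post.

    #include <opencv2/opencv.hpp>
    #include <vector>
    using namespace cv;

    // Calibrate from the collected chessboard corners and save the result as YML.
    // objectPoints/imagePoints/imageSize and "out_camera_data.yml" are assumed names.
    void calibrateAndSave(const std::vector<std::vector<Point3f>>& objectPoints,
                          const std::vector<std::vector<Point2f>>& imagePoints,
                          Size imageSize)
    {
        Mat cameraMatrix = Mat::eye(3, 3, CV_64F);
        Mat distCoeffs   = Mat::zeros(8, 1, CV_64F);
        std::vector<Mat> rvecs, tvecs;

        // RMS re-projection error; the per-view NORM_L2 error shown earlier is the same idea.
        double rms = calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);

        // Write intrinsics and distortion coefficients to YML for later undistortion.
        FileStorage fs("out_camera_data.yml", FileStorage::WRITE);
        fs << "camera_matrix" << cameraMatrix;
        fs << "distortion_coefficients" << distCoeffs;
        fs << "rms_reprojection_error" << rms;
        fs.release();
    }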

 

First, the undistortion function, initUndistortRectifyMap, which has two main calling patterns.

1. Undistort and display the whole image; the edges will show black borders:

    initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
        getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0),
        imageSize, CV_16SC2, map1, map2);

2. The largest image with the edges cropped away, which is suitable for coordinate mapping in 3D reconstruction:

    initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(), Mat(),
        imageSize, CV_16SC2, map1, map2);

 

After the maps have been computed, apply them with remap:

    remap(img, undistorted, map1, map2, INTER_LINEAR);
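Putting it together, here is a rough end-to-end sketch of the undistortion side, using calling pattern 2 above. The file names test.jpg / undistorted.jpg and the YML key names are assumptions.

    #include <opencv2/opencv.hpp>
    using namespace cv;

    int main()
    {
        // Reload the intrinsics produced by the calibration step.
        Mat cameraMatrix, distCoeffs;
        FileStorage fs("out_camera_data.yml", FileStorage::READ);
        fs["camera_matrix"] >> cameraMatrix;
        fs["distortion_coefficients"] >> distCoeffs;
        fs.release();

        Mat img = imread("test.jpg");
        Size imageSize = img.size();

        // Build the maps once; they can be reused for every frame from the same camera.
        Mat map1, map2;
        initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(), Mat(),
                                imageSize, CV_16SC2, map1, map2);

        Mat undistorted;
        remap(img, undistorted, map1, map2, INTER_LINEAR);
        imwrite("undistorted.jpg", undistorted);
        return 0;
    }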


Finally, remapping 3D points from the image coordinate system to the world coordinate system:

// Compute world coordinates from an image point (x, y) plus its depth z.
Point3d cal_worldpoint(Point3d img_point, Mat cameraMatrix)
{
    Point3d world_point;
    double fx, fy, cx, cy, depth;
    fx = cameraMatrix.ptr<double>(0)[0];
    fy = cameraMatrix.ptr<double>(1)[1];
    cx = cameraMatrix.ptr<double>(0)[2];
    cy = cameraMatrix.ptr<double>(1)[2];
    depth = img_point.z;
    world_point.x = depth * (img_point.x - cx) / fx;
    world_point.y = depth * (img_point.y - cy) / fy;
    world_point.z = depth;
    return world_point;
}
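As a quick usage example, back-projecting one pixel with a known depth gives a 3D point (the pixel coordinates and depth below are made up; cameraMatrix is the one loaded from the YML). Strictly speaking this is the camera coordinate frame, since no extrinsics are involved; the post treats it as the world frame.

    #include <iostream>

    // Hypothetical call: pixel (320, 240) with depth 1.5 (same unit as the result).
    Point3d img_point(320.0, 240.0, 1.5);
    Point3d world_point = cal_worldpoint(img_point, cameraMatrix);
    std::cout << world_point << std::endl;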

 
