[Binocular Vision Exploration, Part 3] Working through the stereo calibration, rectification, and correspondence code in Learning OpenCV 3 (1): Overview
Source: Internet · 程序博客网 · 2024/04/30 15:38
----------------------------------------------------------------------------------------------
*** This section is quite long; you may want to keep two windows open while reading. ***
----------------------------------------------------------------------------------------------
The previous post is not actually finished yet, but the references online are a jumble of OpenCV 1.0 and OpenCV 3 material, and it was getting confusing. I am also not a fan of the style that dumps a wall of theory first with no code to illustrate it; I prefer a blunt, show-the-code-directly approach.
So I decided the more reliable path is to analyze the book's source code first.
Configuration
VS2017 + OpenCV 3.3
For setup, see: Configuring OpenCV 3.3.0 with Visual Studio 2017
This part covers
Basic principles + the code from the book
The source code downloaded from the book's repository has a few errors, which I correct step by step while debugging.
Basic principles
A quick review of the fundamentals of stereo vision.
Code
Code source: the book's "Stereo Calibration, Rectification and Correspondence Code Example" (Example 19-3).
Raw data
A .txt file listing the image file names, plus 14 left and 14 right chessboard images, each with significant distortion.
The book's sample code, analyzed piece by piece:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

using namespace std;

void help(char *argv[]) {
  cout << "\n\nExample 19-3. Stereo calibration, rectification, and "
          "correspondence"
       << "\n Reads in list of locations of a sequence of checkerboard "
          "calibration"
       << "\n objects from a left,right stereo camera pair. Calibrates, "
          "rectifies and then"
       << "\n does stereo correspondence."
       << "\n"
       << "\n This program will run on default parameters assuming you "
          "created a build directory"
       << "\n directly below the Learning-OpenCV-3 directory and are "
          "running programs there. NOTE: the list_of_stereo_pairs> must"
       << "\n give the full path name to the left right images, in "
          "alternating"
       << "\n lines: left image, right image, one path/filename per line, see"
       << "\n stereoData/example_19-03_list.txt file, you can comment out "
          "lines"
       << "\n there by starting them with #."
       << "\n"
       << "\nDefault Call (with parameters: board_w = 9, board_h = 6, list = "
          "../stereoData_19-03_list.txt):"
       << "\n" << argv[0] << "\n"
       << "\nManual call:"
       << "\n" << argv[0] << " [<board_w> <board_h> <path/list_of_stereo_pairs>]"
       << "\n\n PRESS ANY KEY TO STEP THROUGH RESULTS AT EACH STAGE."
       << "\n" << endl;
}

static void StereoCalib(const char *imageList, int nx, int ny,
                        bool useUncalibrated) {
  bool displayCorners = true;
  bool showUndistorted = true;
  bool isVerticalStereo = false; // horiz or vert cams
  const int maxScale = 1;
  const float squareSize = 1.f;  // actual square size
  FILE *f = fopen(imageList, "rt");
  int i, j, lr;
  int N = nx * ny;
  cv::Size board_sz = cv::Size(nx, ny);
  vector<string> imageNames[2];
  vector<cv::Point3f> boardModel;
  vector<vector<cv::Point3f> > objectPoints;
  vector<vector<cv::Point2f> > points[2];
  vector<cv::Point2f> corners[2];
  bool found[2] = {false, false};
  cv::Size imageSize;

  // READ IN THE LIST OF CIRCLE GRIDS:
  //
  if (!f) {
    cout << "Cannot open file " << imageList << endl;
    return;
  }
  for (i = 0; i < ny; i++)
    for (j = 0; j < nx; j++)
      boardModel.push_back(
          cv::Point3f((float)(i * squareSize), (float)(j * squareSize), 0.f));
  i = 0;
  for (;;) {
    char buf[1024];
    lr = i % 2;
    if (lr == 0)
      found[0] = found[1] = false;
    if (!fgets(buf, sizeof(buf) - 3, f))
      break;
    size_t len = strlen(buf);
    while (len > 0 && isspace(buf[len - 1]))
      buf[--len] = '\0';
    if (buf[0] == '#')
      continue;
    cv::Mat img = cv::imread(buf, 0);
    if (img.empty())
      break;
    imageSize = img.size();
    imageNames[lr].push_back(buf);
    i++;

    // If we did not find board on the left image,
    // it does not make sense to find it on the right.
    //
    if (lr == 1 && !found[0])
      continue;

    // Find circle grids and centers therein:
    for (int s = 1; s <= maxScale; s++) {
      cv::Mat timg = img;
      if (s > 1)
        resize(img, timg, cv::Size(), s, s, cv::INTER_CUBIC);
      // Just as example, this would be the call if you had circle calibration
      // boards ...
      //   found[lr] = cv::findCirclesGrid(timg, cv::Size(nx, ny), corners[lr],
      //                                   cv::CALIB_CB_ASYMMETRIC_GRID |
      //                                       cv::CALIB_CB_CLUSTERING);
      // ...but we have chessboards in our images
      found[lr] = cv::findChessboardCorners(timg, board_sz, corners[lr]);
      if (found[lr] || s == maxScale) {
        cv::Mat mcorners(corners[lr]);
        mcorners *= (1. / s);
      }
      if (found[lr])
        break;
    }
    if (displayCorners) {
      cout << buf << endl;
      cv::Mat cimg;
      cv::cvtColor(img, cimg, cv::COLOR_GRAY2BGR);

      // draw chessboard corners works for circle grids too
      cv::drawChessboardCorners(cimg, cv::Size(nx, ny), corners[lr], found[lr]);
      cv::imshow("Corners", cimg);
      if ((cv::waitKey(0) & 255) == 27) // Allow ESC to quit
        exit(-1);
    } else
      cout << '.';
    if (lr == 1 && found[0] && found[1]) {
      objectPoints.push_back(boardModel);
      points[0].push_back(corners[0]);
      points[1].push_back(corners[1]);
    }
  }
  fclose(f);

  // CALIBRATE THE STEREO CAMERAS
  cv::Mat M1 = cv::Mat::eye(3, 3, CV_64F);
  cv::Mat M2 = cv::Mat::eye(3, 3, CV_64F);
  cv::Mat D1, D2, R, T, E, F;
  cout << "\nRunning stereo calibration ...\n";
  cv::stereoCalibrate(
      objectPoints, points[0], points[1], M1, D1, M2, D2, imageSize, R, T, E, F,
      cv::CALIB_FIX_ASPECT_RATIO | cv::CALIB_ZERO_TANGENT_DIST |
          cv::CALIB_SAME_FOCAL_LENGTH,
      cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 100,
                       1e-5));
  cout << "Done! Press any key to step through images, ESC to exit\n\n";

  // CALIBRATION QUALITY CHECK
  // because the output fundamental matrix implicitly
  // includes all the output information,
  // we can check the quality of calibration using the
  // epipolar geometry constraint: m2^t*F*m1=0
  vector<cv::Point3f> lines[2];
  double avgErr = 0;
  int nframes = (int)objectPoints.size();
  for (i = 0; i < nframes; i++) {
    vector<cv::Point2f> &pt0 = points[0][i];
    vector<cv::Point2f> &pt1 = points[1][i];
    cv::undistortPoints(pt0, pt0, M1, D1, cv::Mat(), M1);
    cv::undistortPoints(pt1, pt1, M2, D2, cv::Mat(), M2);
    cv::computeCorrespondEpilines(pt0, 1, F, lines[0]);
    cv::computeCorrespondEpilines(pt1, 2, F, lines[1]);

    for (j = 0; j < N; j++) {
      double err =
          fabs(pt0[j].x * lines[1][j].x + pt0[j].y * lines[1][j].y +
               lines[1][j].z) +
          fabs(pt1[j].x * lines[0][j].x + pt1[j].y * lines[0][j].y +
               lines[0][j].z);
      avgErr += err;
    }
  }
  cout << "avg err = " << avgErr / (nframes * N) << endl;

  // COMPUTE AND DISPLAY RECTIFICATION
  //
  if (showUndistorted) {
    cv::Mat R1, R2, P1, P2, map11, map12, map21, map22;

    // IF BY CALIBRATED (BOUGUET'S METHOD)
    //
    if (!useUncalibrated) {
      stereoRectify(M1, D1, M2, D2, imageSize, R, T, R1, R2, P1, P2,
                    cv::noArray(), 0);
      isVerticalStereo = fabs(P2.at<double>(1, 3)) > fabs(P2.at<double>(0, 3));
      // Precompute maps for cvRemap()
      initUndistortRectifyMap(M1, D1, R1, P1, imageSize, CV_16SC2, map11,
                              map12);
      initUndistortRectifyMap(M2, D2, R2, P2, imageSize, CV_16SC2, map21,
                              map22);
    }

    // OR ELSE HARTLEY'S METHOD
    //
    else {
      // use intrinsic parameters of each camera, but
      // compute the rectification transformation directly
      // from the fundamental matrix
      vector<cv::Point2f> allpoints[2];
      for (i = 0; i < nframes; i++) {
        copy(points[0][i].begin(), points[0][i].end(),
             back_inserter(allpoints[0]));
        copy(points[1][i].begin(), points[1][i].end(),
             back_inserter(allpoints[1]));
      }
      cv::Mat F = findFundamentalMat(allpoints[0], allpoints[1], cv::FM_8POINT);
      cv::Mat H1, H2;
      cv::stereoRectifyUncalibrated(allpoints[0], allpoints[1], F, imageSize,
                                    H1, H2, 3);
      R1 = M1.inv() * H1 * M1;
      R2 = M2.inv() * H2 * M2;

      // Precompute map for cvRemap()
      // NOTE: P1 and P2 are only assigned in the calibrated branch above, so
      // they are still empty here -- one of the book-code issues mentioned
      // earlier.
      cv::initUndistortRectifyMap(M1, D1, R1, P1, imageSize, CV_16SC2, map11,
                                  map12);
      cv::initUndistortRectifyMap(M2, D2, R2, P2, imageSize, CV_16SC2, map21,
                                  map22);
    }

    // RECTIFY THE IMAGES AND FIND DISPARITY MAPS
    //
    cv::Mat pair;
    if (!isVerticalStereo)
      pair.create(imageSize.height, imageSize.width * 2, CV_8UC3);
    else
      pair.create(imageSize.height * 2, imageSize.width, CV_8UC3);

    // Setup for finding stereo correspondences
    //
    cv::Ptr<cv::StereoSGBM> stereo = cv::StereoSGBM::create(
        -64, 128, 11, 100, 1000, 32, 0, 15, 1000, 16, cv::StereoSGBM::MODE_HH);

    for (i = 0; i < nframes; i++) {
      cv::Mat img1 = cv::imread(imageNames[0][i].c_str(), 0);
      cv::Mat img2 = cv::imread(imageNames[1][i].c_str(), 0);
      cv::Mat img1r, img2r, disp, vdisp;
      if (img1.empty() || img2.empty())
        continue;
      cv::remap(img1, img1r, map11, map12, cv::INTER_LINEAR);
      cv::remap(img2, img2r, map21, map22, cv::INTER_LINEAR);
      if (!isVerticalStereo || !useUncalibrated) {
        // When the stereo camera is oriented vertically,
        // Hartley method does not transpose the
        // image, so the epipolar lines in the rectified
        // images are vertical. Stereo correspondence
        // function does not support such a case.
        stereo->compute(img1r, img2r, disp);
        cv::normalize(disp, vdisp, 0, 256, cv::NORM_MINMAX, CV_8U);
        cv::imshow("disparity", vdisp);
      }
      if (!isVerticalStereo) {
        cv::Mat part = pair.colRange(0, imageSize.width);
        cvtColor(img1r, part, cv::COLOR_GRAY2BGR);
        part = pair.colRange(imageSize.width, imageSize.width * 2);
        cvtColor(img2r, part, cv::COLOR_GRAY2BGR);
        for (j = 0; j < imageSize.height; j += 16)
          cv::line(pair, cv::Point(0, j), cv::Point(imageSize.width * 2, j),
                   cv::Scalar(0, 255, 0));
      } else {
        cv::Mat part = pair.rowRange(0, imageSize.height);
        cv::cvtColor(img1r, part, cv::COLOR_GRAY2BGR);
        part = pair.rowRange(imageSize.height, imageSize.height * 2);
        cv::cvtColor(img2r, part, cv::COLOR_GRAY2BGR);
        for (j = 0; j < imageSize.width; j += 16)
          line(pair, cv::Point(j, 0), cv::Point(j, imageSize.height * 2),
               cv::Scalar(0, 255, 0));
      }
      cv::imshow("rectified", pair);
      if ((cv::waitKey() & 255) == 27)
        break;
    }
  }
}

//
// Default Call (with parameters: board_w = 9, board_h = 6, list =
//   ../stereoData_19-03_list.txt):
// ./example_19-03
//
// Manual call:
// ./example_19-03 [<board_w> <board_h> <path/list_of_stereo_pairs>]
//
// Press any key to step through results, ESC to exit
//
int main(int argc, char **argv) {
  help(argv);
  int board_w = 9, board_h = 6;
  const char *board_list = "../stereoData/example_19-03_list.txt";
  if (argc == 4) {
    // NOTE: this reads the list path first, although the help text above
    // advertises the order <board_w> <board_h> <list> -- another of the
    // small errors in the downloaded code.
    board_list = argv[1];
    board_w = atoi(argv[2]);
    board_h = atoi(argv[3]);
  }
  StereoCalib(board_list, board_w, board_h, true);
  return 0;
}
Analysis
What the code does
The goal of this code is to calibrate the stereo rig and rectify the 14 stereo image pairs.
Prerequisites
As explained in the basic principles, a binocular stereo system should be frontal parallel.
Two reasons: first, it maximizes the overlap between the left and right views; second, it reduces the distortion introduced by reprojection.
Because OpenCV cannot yet rectify image pairs whose epipoles lie inside the image frame (an epipole inside the image means the angle between the two cameras is large), this deserves extra attention when actually capturing the images.
Solutions for that case may exist; Learning OpenCV 3 cites [Pollefeys99b] for it.
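As a reminder of why the frontal-parallel arrangement is worth the trouble (standard stereo geometry, not part of the book's code): once the two image planes are row-aligned, depth follows from the horizontal disparity alone,

```latex
Z = \frac{f\,T}{d}, \qquad d = x_l - x_r
```

where f is the focal length, T the baseline, and d the disparity between the matched left and right x-coordinates. This simple triangulation is exactly what rectification buys us.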
A mind map summarizing the two parts above (image in the original post):
Program structure
The stereo calibration and rectification program
consists of five steps:
1. Read in the stereo image pairs and locate the chessboard corners to sub-pixel accuracy.
2. Call stereoCalibrate() to perform the stereo calibration and obtain the intrinsics plus the essential and fundamental matrices.
3. Assess the calibration accuracy, using the distance between each point and its epipolar line as the measure.
4. Rectify the images (remove distortion and output row-aligned images).
5. Compute disparity maps with StereoSGBM.
Each step is covered in detail below.
Code walkthrough
main()
main() calls StereoCalib() to run the stereo calibration; the arguments are, in order, the name of the board-list file and the number of interior corners along the board's width (board_w) and height (board_h), here (9, 6).
Because I am not very familiar with C++ I/O and the fopen() call errors out under VS2017 (the CRT flags it as unsafe), during debugging I rewrote the list as a read.txt containing just the file names:
left01.jpg
right01.jpg
left02.jpg
right02.jpg
left03.jpg
right03.jpg
left04.jpg
right04.jpg
left05.jpg
right05.jpg
left06.jpg
right06.jpg
left07.jpg
right07.jpg
left08.jpg
right08.jpg
left09.jpg
right09.jpg
left10.jpg
right10.jpg
left11.jpg
right11.jpg
left12.jpg
right12.jpg
left13.jpg
right13.jpg
left14.jpg
right14.jpg
The adapted main() is:
int main(int argc, char **argv) {
  help(argv);
  int board_w = 9, board_h = 6;
  const char *board_list = "read.txt";
  if (argc == 4) {
    board_list = argv[1];
    board_w = atoi(argv[2]);
    board_h = atoi(argv[3]);
  }
  StereoCalib(board_list, board_w, board_h, true);
  return 0;
}