SfM with OpenCV (Part 2): Multi-View 3D Reconstruction


Last time we covered two-view 3D reconstruction; this time we move on to reconstruction from multiple views. First, to simplify the problem, we make an important assumption: the images used for multi-view reconstruction are ordered, i.e. adjacent images were taken from adjacent positions.


Finding the Transformation Matrix of the Third Camera

From the previous article we know that the transformation between two cameras can be obtained with the findEssentialMat and recoverPose functions. Take the coordinate system of the first camera as the world coordinate system. Now a third image (camera) is added; how do we determine the transformation from the third camera (camera 3 from now on) to the world coordinate system?

The simplest idea is to reuse the two-view approach: match feature points between the third image and the first image, then call findEssentialMat and recoverPose. But what about the fourth image, the fifth, and beyond? As the number of images grows, each newly added image may differ more and more from the first one, and feature matching becomes extremely difficult, so the two-view approach can no longer be used.

Can we instead match each newly added image against its neighbor, e.g. the third image against the second, the fourth against the third, and so on? Of course we can, but then findEssentialMat and recoverPose can no longer be used to obtain the camera transformations, because these two functions compute relative transformations, such as the transformation from camera 3 to camera 2, whereas what we need is the transformation from camera 3 to camera 1. You might say: we already know the transformation from camera 2 to camera 1 and the transformation from camera 3 to camera 2, so can't we just compose them to get the transformation from camera 3 to camera 1? In fact, this way you can only recover the rotation from camera 3 to camera 1 (the rotation matrix R); the translation vector T between them cannot be obtained. The reason is that the translation vectors returned by the two functions above are unit vectors, so the relative scale of the translations between camera pairs is lost.
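To make this concrete, here is a short derivation in my own notation (not from the original post). Let $(R_{21}, t_{21})$ be the pose of camera 2 relative to camera 1 and $(R_{32}, t_{32})$ the pose of camera 3 relative to camera 2. Composing the two gives

$$R_{31} = R_{32} R_{21}, \qquad t_{31} = R_{32}\,t_{21} + t_{32}.$$

recoverPose, however, only returns unit-length directions $\hat{t}_{21}$ and $\hat{t}_{32}$, i.e. $t_{21} = s_1 \hat{t}_{21}$ and $t_{32} = s_2 \hat{t}_{32}$ with unknown scales $s_1, s_2 > 0$. The rotation $R_{31}$ can still be computed, but $t_{31} = s_1 R_{32}\hat{t}_{21} + s_2 \hat{t}_{32}$ depends on the unknown ratio $s_1 : s_2$, so it cannot be determined from the two relative poses alone.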

So how do we solve these problems? Enter the stars of this article: solvePnP and solvePnPRansac. According to OpenCV's official documentation, these functions solve for the camera's position in space from correspondences between points in space and points in the image. In other words, if I know the coordinates of some points in space and also know the pixel coordinates of those same points in the image, then solvePnP can tell me where the camera is in space. solvePnP and solvePnPRansac do the same job, except that the latter uses random sample consensus (RANSAC), which makes it more robust to noise; this article uses the latter.
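A minimal sketch of the call (the variable names are illustrative placeholders; it assumes the intrinsic matrix K is known and the images are undistorted, so noArray() is passed for the distortion coefficients):

vector<Point3f> object_points;  // known 3D points in the world coordinate system
vector<Point2f> image_points;   // pixel coordinates of the same points in the new image
// ... fill object_points and image_points from the feature matches ...
Mat r, T;
solvePnPRansac(object_points, image_points, K, noArray(), r, T);  // pose of the camera in the world frame
Mat R;
Rodrigues(r, R);  // the solvePnP family returns a rotation vector; convert it to a 3x3 rotation matrix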

So, given such a convenient function, how do we apply it to our reconstruction? First, reconstruct the first two images using the two-view method, which yields a set of points in space. When the third image is added, match its features against the second image. Some of those matched points are bound to also be matches between image 2 and image 1, which means the spatial coordinates of part of the matches are already known, and we also know the pixel coordinates of those points in the third image. That is exactly the information solvePnP needs, so the spatial position of the third camera follows naturally. Since the spatial points are all expressed in the world coordinate system (i.e. the first camera's coordinate system), the camera position returned by solvePnP is also in the world coordinate system, i.e. the transformation from camera 3 to camera 1.

Adding More Images

Once the transformation of camera 3 has been obtained with the method above, the triangulatePoints approach from the previous article can be used to triangulate the matches between image 3 and image 2 and obtain their spatial coordinates. So that later images can still be handled the same way, the newly obtained spatial points must be fused with the existing 3D point cloud. Points that already exist in space need not be added again; we only add points that are matched between images 2 and 3 but do not yet have a corresponding point in the cloud (i.e. were not carried over from the matches between images 1 and 2). This process is repeated for every new image.
[Figure: multi-view reconstruction pipeline]
To make point-cloud fusion and future extensions easier, we need to store, for every feature point in every image, its corresponding point in space. The code uses a two-dimensional list named correspond_struct_idx: correspond_struct_idx[i][j] is the index, in the point cloud, of the spatial point corresponding to the j-th feature point of the i-th image; an index less than zero means that feature point has no corresponding point in space. With this structure, the spatial position of any feature point can be looked up from the queryIdx and trainIdx of a feature match, as sketched below.
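A small sketch of that lookup, using the same variable names that appear in the code later in this article (m is a hypothetical DMatch between image i and image i+1):

// m.queryIdx indexes a feature in image i, m.trainIdx a feature in image i+1
int struct_idx = correspond_struct_idx[i][m.queryIdx];   // point-cloud index, or -1 if none
if (struct_idx >= 0)
{
    Point3f object_point = structure[struct_idx];                      // known 3D point (world frame)
    Point2f image_point  = key_points_for_all[i + 1][m.trainIdx].pt;   // its pixel position in the new image
    // together they form one 3D-2D correspondence for solvePnPRansac
}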

Code Implementation

Much of the code from the previous article can be reused without modification, but the overall flow of the program changes considerably. The first step is to initialize the point cloud, i.e. reconstruct the first two images of the sequence with the two-view method and initialize correspond_struct_idx.

void init_structure(
    Mat K,
    vector<vector<KeyPoint>>& key_points_for_all,
    vector<vector<Vec3b>>& colors_for_all,
    vector<vector<DMatch>>& matches_for_all,
    vector<Point3f>& structure,
    vector<vector<int>>& correspond_struct_idx,
    vector<Vec3b>& colors,
    vector<Mat>& rotations,
    vector<Mat>& motions
    )
{
    //Compute the transformation between the first two images
    vector<Point2f> p1, p2;
    vector<Vec3b> c2;
    Mat R, T;   //Rotation matrix and translation vector
    Mat mask;   //Entries greater than zero in mask mark matched points; zero marks mismatches
    get_matched_points(key_points_for_all[0], key_points_for_all[1], matches_for_all[0], p1, p2);
    get_matched_colors(colors_for_all[0], colors_for_all[1], matches_for_all[0], colors, c2);
    find_transform(K, p1, p2, R, T, mask);

    //Reconstruct the first two images
    maskout_points(p1, mask);
    maskout_points(p2, mask);
    maskout_colors(colors, mask);
    Mat R0 = Mat::eye(3, 3, CV_64FC1);
    Mat T0 = Mat::zeros(3, 1, CV_64FC1);
    reconstruct(K, R0, T0, R, T, p1, p2, structure);

    //Save the transformations
    rotations = { R0, R };
    motions = { T0, T };

    //Initialize correspond_struct_idx to exactly the same size as key_points_for_all
    correspond_struct_idx.clear();
    correspond_struct_idx.resize(key_points_for_all.size());
    for (int i = 0; i < key_points_for_all.size(); ++i)
    {
        correspond_struct_idx[i].resize(key_points_for_all[i].size(), -1);
    }

    //Fill in the structure indices of the first two images
    int idx = 0;
    vector<DMatch>& matches = matches_for_all[0];
    for (int i = 0; i < matches.size(); ++i)
    {
        if (mask.at<uchar>(i) == 0)
            continue;
        correspond_struct_idx[0][matches[i].queryIdx] = idx;
        correspond_struct_idx[1][matches[i].trainIdx] = idx;
        ++idx;
    }
}

Once the initial point cloud is available, the remaining images can be reconstructed incrementally. Note that, for ease of implementation, the feature matching between all image pairs has already been done in advance and stored in the list matches_for_all. The key to incremental reconstruction is the call to solvePnPRansac, which needs spatial point coordinates and the corresponding pixel coordinates as parameters. With correspond_struct_idx, looking up these correspondences is straightforward, as shown below.

void get_objpoints_and_imgpoints(
    vector<DMatch>& matches,
    vector<int>& struct_indices,
    vector<Point3f>& structure,
    vector<KeyPoint>& key_points,
    vector<Point3f>& object_points,
    vector<Point2f>& image_points)
{
    object_points.clear();
    image_points.clear();
    for (int i = 0; i < matches.size(); ++i)
    {
        int query_idx = matches[i].queryIdx;
        int train_idx = matches[i].trainIdx;
        int struct_idx = struct_indices[query_idx];
        if (struct_idx < 0) continue;
        object_points.push_back(structure[struct_idx]);
        image_points.push_back(key_points[train_idx].pt);
    }
}

Then solvePnPRansac is called to obtain the camera's rotation vector and translation. Since we work with rotation matrices everywhere, OpenCV's Rodrigues function is used here to convert the rotation vector into a rotation matrix. After that, the reconstruct function from the previous article is used to reconstruct (triangulate) the matched points, with a few small modifications to make it suitable for multi-view reconstruction.

void reconstruct(Mat& K, Mat& R1, Mat& T1, Mat& R2, Mat& T2, vector<Point2f>& p1, vector<Point2f>& p2, vector<Point3f>& structure)
{
    //Projection matrices [R T] of the two cameras; triangulatePoints only supports float
    Mat proj1(3, 4, CV_32FC1);
    Mat proj2(3, 4, CV_32FC1);
    R1.convertTo(proj1(Range(0, 3), Range(0, 3)), CV_32FC1);
    T1.convertTo(proj1.col(3), CV_32FC1);
    R2.convertTo(proj2(Range(0, 3), Range(0, 3)), CV_32FC1);
    T2.convertTo(proj2.col(3), CV_32FC1);

    Mat fK;
    K.convertTo(fK, CV_32FC1);
    proj1 = fK*proj1;
    proj2 = fK*proj2;

    //Triangulation
    Mat s;
    triangulatePoints(proj1, proj2, p1, p2, s);

    structure.clear();
    structure.reserve(s.cols);
    for (int i = 0; i < s.cols; ++i)
    {
        Mat_<float> col = s.col(i);
        col /= col(3);  //Homogeneous coordinates: divide by the last element to get the actual coordinates
        structure.push_back(Point3f(col(0), col(1), col(2)));
    }
}

Finally, the newly reconstructed structure is fused with the existing point cloud.

void fusion_structure(
    vector<DMatch>& matches,
    vector<int>& struct_indices,
    vector<int>& next_struct_indices,
    vector<Point3f>& structure,
    vector<Point3f>& next_structure,
    vector<Vec3b>& colors,
    vector<Vec3b>& next_colors
    )
{
    for (int i = 0; i < matches.size(); ++i)
    {
        int query_idx = matches[i].queryIdx;
        int train_idx = matches[i].trainIdx;
        int struct_idx = struct_indices[query_idx];
        if (struct_idx >= 0) //If the point already exists in space, both matched features refer to the same 3D point and must share its index
        {
            next_struct_indices[train_idx] = struct_idx;
            continue;
        }
        //If the point does not yet exist in space, add it to the structure; both matched features then index the newly added point
        structure.push_back(next_structure[i]);
        colors.push_back(next_colors[i]);
        struct_indices[query_idx] = next_struct_indices[train_idx] = structure.size() - 1;
    }
}

The overall code for the incremental reconstruction of the images is roughly as follows.

//Initialize the structure (3D point cloud)
init_structure(
    K,
    key_points_for_all,
    colors_for_all,
    matches_for_all,
    structure,
    correspond_struct_idx,
    colors,
    rotations,
    motions
    );

//Reconstruct the remaining images incrementally
for (int i = 1; i < matches_for_all.size(); ++i)
{
    vector<Point3f> object_points;
    vector<Point2f> image_points;
    Mat r, R, T;
    //Mat mask;

    //Get the 3D points corresponding to the matched features in image i,
    //and their pixel coordinates in image i+1
    get_objpoints_and_imgpoints(
        matches_for_all[i],
        correspond_struct_idx[i],
        structure,
        key_points_for_all[i+1],
        object_points,
        image_points
        );

    //Solve for the transformation
    solvePnPRansac(object_points, image_points, K, noArray(), r, T);
    //Convert the rotation vector to a rotation matrix
    Rodrigues(r, R);
    //Save the transformation
    rotations.push_back(R);
    motions.push_back(T);

    vector<Point2f> p1, p2;
    vector<Vec3b> c1, c2;
    get_matched_points(key_points_for_all[i], key_points_for_all[i + 1], matches_for_all[i], p1, p2);
    get_matched_colors(colors_for_all[i], colors_for_all[i + 1], matches_for_all[i], c1, c2);

    //Triangulate using the R and T obtained above
    vector<Point3f> next_structure;
    reconstruct(K, rotations[i], motions[i], R, T, p1, p2, next_structure);

    //Fuse the new reconstruction with the existing point cloud
    fusion_structure(
        matches_for_all[i],
        correspond_struct_idx[i],
        correspond_struct_idx[i + 1],
        structure,
        next_structure,
        colors,
        c1
        );
}

Testing

I tested with eight images which, as required by the simplifying assumption, are ordered.
[Figure: the image sequence]
Most of the program's time is spent on feature extraction and matching; the actual reconstruction takes very little time. The final result is shown below.
[Figure: the reconstruction result]
Each colored coordinate frame in the figure represents a camera.


Further Thoughts

  • This multi-view reconstruction program requires the images to be ordered. If the images are unordered, e.g. just casual shots of some target from different angles, how should the program be modified?
  • Incremental reconstruction has a major drawback: as more and more images are added, the error keeps accumulating, until eventually it becomes so large that the reconstruction completely drifts away from the target. How can this be solved?

Interested readers can think about the two questions above. The second one is harder; I will cover it in detail in the next article.


Program

After the program runs, the 3D reconstruction result is written to the file structure.yml in the Viewer directory. The Viewer directory also contains an SfMViewer program; simply run it to read the yml file and display the 3D structure.

The complete program:

#include <opencv2/xfeatures2d/nonfree.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <iostream>
#include "tinydir.h"

using namespace cv;
using namespace std;

#define OUTPUT_NAME "Viewer/structure.yml"

void extract_features(
    vector<string>& image_names,
    vector<vector<KeyPoint>>& key_points_for_all,
    vector<Mat>& descriptor_for_all,
    vector<vector<Vec3b>>& colors_for_all
    )
{
    key_points_for_all.clear();
    descriptor_for_all.clear();
    Mat image;
    Ptr<Feature2D> sift = xfeatures2d::SIFT::create(0, 3, 0.04, 10);
    for (auto it = image_names.begin(); it != image_names.end(); ++it)
    {
        image = imread(*it);
        if (image.empty()) continue;

        cout << "Extracting features: " << *it << endl;

        vector<KeyPoint> key_points;
        Mat descriptor;
        sift->detectAndCompute(image, noArray(), key_points, descriptor);
        if (key_points.size() <= 10) continue;

        key_points_for_all.push_back(key_points);
        descriptor_for_all.push_back(descriptor);

        vector<Vec3b> colors(key_points.size());
        for (int i = 0; i < key_points.size(); ++i)
        {
            Point2f& p = key_points[i].pt;
            colors[i] = image.at<Vec3b>(p.y, p.x);
        }
        colors_for_all.push_back(colors);
    }
}

void match_features(Mat& query, Mat& train, vector<DMatch>& matches)
{
    vector<vector<DMatch>> knn_matches;
    BFMatcher matcher(NORM_L2);
    matcher.knnMatch(query, train, knn_matches, 2);

    //Find the minimum distance among the matches that pass the ratio test
    float min_dist = FLT_MAX;
    for (int r = 0; r < knn_matches.size(); ++r)
    {
        //Ratio test
        if (knn_matches[r][0].distance > 0.6*knn_matches[r][1].distance)
            continue;
        float dist = knn_matches[r][0].distance;
        if (dist < min_dist) min_dist = dist;
    }

    matches.clear();
    for (size_t r = 0; r < knn_matches.size(); ++r)
    {
        //Discard matches that fail the ratio test or whose distance is too large
        if (
            knn_matches[r][0].distance > 0.6*knn_matches[r][1].distance ||
            knn_matches[r][0].distance > 5 * max(min_dist, 10.0f)
            )
            continue;
        //Keep the match
        matches.push_back(knn_matches[r][0]);
    }
}

void match_features(vector<Mat>& descriptor_for_all, vector<vector<DMatch>>& matches_for_all)
{
    matches_for_all.clear();
    //n images yield n-1 consecutive pairwise matches:
    //1 with 2, 2 with 3, 3 with 4, and so on
    for (int i = 0; i < descriptor_for_all.size() - 1; ++i)
    {
        cout << "Matching images " << i << " - " << i + 1 << endl;
        vector<DMatch> matches;
        match_features(descriptor_for_all[i], descriptor_for_all[i + 1], matches);
        matches_for_all.push_back(matches);
    }
}

bool find_transform(Mat& K, vector<Point2f>& p1, vector<Point2f>& p2, Mat& R, Mat& T, Mat& mask)
{
    //Get the focal length and principal point from the intrinsic matrix
    double focal_length = 0.5*(K.at<double>(0) + K.at<double>(4));
    Point2d principle_point(K.at<double>(2), K.at<double>(5));

    //Estimate the essential matrix from the matched points, using RANSAC to reject further mismatches
    Mat E = findEssentialMat(p1, p2, focal_length, principle_point, RANSAC, 0.999, 1.0, mask);
    if (E.empty()) return false;

    double feasible_count = countNonZero(mask);
    cout << (int)feasible_count << " -in- " << p1.size() << endl;
    //For RANSAC the result is unreliable when outliers exceed 50%
    if (feasible_count <= 15 || (feasible_count / p1.size()) < 0.6)
        return false;

    //Decompose the essential matrix to obtain the relative transformation
    int pass_count = recoverPose(E, p1, p2, R, T, focal_length, principle_point, mask);

    //Enough of the points must lie in front of both cameras
    if (((double)pass_count) / feasible_count < 0.7)
        return false;

    return true;
}

void get_matched_points(
    vector<KeyPoint>& p1,
    vector<KeyPoint>& p2,
    vector<DMatch> matches,
    vector<Point2f>& out_p1,
    vector<Point2f>& out_p2
    )
{
    out_p1.clear();
    out_p2.clear();
    for (int i = 0; i < matches.size(); ++i)
    {
        out_p1.push_back(p1[matches[i].queryIdx].pt);
        out_p2.push_back(p2[matches[i].trainIdx].pt);
    }
}

void get_matched_colors(
    vector<Vec3b>& c1,
    vector<Vec3b>& c2,
    vector<DMatch> matches,
    vector<Vec3b>& out_c1,
    vector<Vec3b>& out_c2
    )
{
    out_c1.clear();
    out_c2.clear();
    for (int i = 0; i < matches.size(); ++i)
    {
        out_c1.push_back(c1[matches[i].queryIdx]);
        out_c2.push_back(c2[matches[i].trainIdx]);
    }
}

void reconstruct(Mat& K, Mat& R1, Mat& T1, Mat& R2, Mat& T2, vector<Point2f>& p1, vector<Point2f>& p2, vector<Point3f>& structure)
{
    //Projection matrices [R T] of the two cameras; triangulatePoints only supports float
    Mat proj1(3, 4, CV_32FC1);
    Mat proj2(3, 4, CV_32FC1);
    R1.convertTo(proj1(Range(0, 3), Range(0, 3)), CV_32FC1);
    T1.convertTo(proj1.col(3), CV_32FC1);
    R2.convertTo(proj2(Range(0, 3), Range(0, 3)), CV_32FC1);
    T2.convertTo(proj2.col(3), CV_32FC1);

    Mat fK;
    K.convertTo(fK, CV_32FC1);
    proj1 = fK*proj1;
    proj2 = fK*proj2;

    //Triangulation
    Mat s;
    triangulatePoints(proj1, proj2, p1, p2, s);

    structure.clear();
    structure.reserve(s.cols);
    for (int i = 0; i < s.cols; ++i)
    {
        Mat_<float> col = s.col(i);
        col /= col(3);  //Homogeneous coordinates: divide by the last element to get the actual coordinates
        structure.push_back(Point3f(col(0), col(1), col(2)));
    }
}

void maskout_points(vector<Point2f>& p1, Mat& mask)
{
    vector<Point2f> p1_copy = p1;
    p1.clear();
    for (int i = 0; i < mask.rows; ++i)
    {
        if (mask.at<uchar>(i) > 0)
            p1.push_back(p1_copy[i]);
    }
}

void maskout_colors(vector<Vec3b>& p1, Mat& mask)
{
    vector<Vec3b> p1_copy = p1;
    p1.clear();
    for (int i = 0; i < mask.rows; ++i)
    {
        if (mask.at<uchar>(i) > 0)
            p1.push_back(p1_copy[i]);
    }
}

void save_structure(string file_name, vector<Mat>& rotations, vector<Mat>& motions, vector<Point3f>& structure, vector<Vec3b>& colors)
{
    int n = (int)rotations.size();

    FileStorage fs(file_name, FileStorage::WRITE);
    fs << "Camera Count" << n;
    fs << "Point Count" << (int)structure.size();

    fs << "Rotations" << "[";
    for (size_t i = 0; i < n; ++i)
    {
        fs << rotations[i];
    }
    fs << "]";

    fs << "Motions" << "[";
    for (size_t i = 0; i < n; ++i)
    {
        fs << motions[i];
    }
    fs << "]";

    fs << "Points" << "[";
    for (size_t i = 0; i < structure.size(); ++i)
    {
        fs << structure[i];
    }
    fs << "]";

    fs << "Colors" << "[";
    for (size_t i = 0; i < colors.size(); ++i)
    {
        fs << colors[i];
    }
    fs << "]";

    fs.release();
}

void get_objpoints_and_imgpoints(
    vector<DMatch>& matches,
    vector<int>& struct_indices,
    vector<Point3f>& structure,
    vector<KeyPoint>& key_points,
    vector<Point3f>& object_points,
    vector<Point2f>& image_points)
{
    object_points.clear();
    image_points.clear();
    for (int i = 0; i < matches.size(); ++i)
    {
        int query_idx = matches[i].queryIdx;
        int train_idx = matches[i].trainIdx;
        int struct_idx = struct_indices[query_idx];
        if (struct_idx < 0) continue;
        object_points.push_back(structure[struct_idx]);
        image_points.push_back(key_points[train_idx].pt);
    }
}

void fusion_structure(
    vector<DMatch>& matches,
    vector<int>& struct_indices,
    vector<int>& next_struct_indices,
    vector<Point3f>& structure,
    vector<Point3f>& next_structure,
    vector<Vec3b>& colors,
    vector<Vec3b>& next_colors
    )
{
    for (int i = 0; i < matches.size(); ++i)
    {
        int query_idx = matches[i].queryIdx;
        int train_idx = matches[i].trainIdx;
        int struct_idx = struct_indices[query_idx];
        if (struct_idx >= 0) //If the point already exists in space, both matched features refer to the same 3D point and must share its index
        {
            next_struct_indices[train_idx] = struct_idx;
            continue;
        }
        //If the point does not yet exist in space, add it to the structure; both matched features then index the newly added point
        structure.push_back(next_structure[i]);
        colors.push_back(next_colors[i]);
        struct_indices[query_idx] = next_struct_indices[train_idx] = structure.size() - 1;
    }
}

void init_structure(
    Mat K,
    vector<vector<KeyPoint>>& key_points_for_all,
    vector<vector<Vec3b>>& colors_for_all,
    vector<vector<DMatch>>& matches_for_all,
    vector<Point3f>& structure,
    vector<vector<int>>& correspond_struct_idx,
    vector<Vec3b>& colors,
    vector<Mat>& rotations,
    vector<Mat>& motions
    )
{
    //Compute the transformation between the first two images
    vector<Point2f> p1, p2;
    vector<Vec3b> c2;
    Mat R, T;       //Rotation matrix and translation vector
    Mat mask;       //Entries greater than zero in mask mark matched points; zero marks mismatches
    get_matched_points(key_points_for_all[0], key_points_for_all[1], matches_for_all[0], p1, p2);
    get_matched_colors(colors_for_all[0], colors_for_all[1], matches_for_all[0], colors, c2);
    find_transform(K, p1, p2, R, T, mask);

    //Reconstruct the first two images
    maskout_points(p1, mask);
    maskout_points(p2, mask);
    maskout_colors(colors, mask);
    Mat R0 = Mat::eye(3, 3, CV_64FC1);
    Mat T0 = Mat::zeros(3, 1, CV_64FC1);
    reconstruct(K, R0, T0, R, T, p1, p2, structure);

    //Save the transformations
    rotations = { R0, R };
    motions = { T0, T };

    //Initialize correspond_struct_idx to exactly the same size as key_points_for_all
    correspond_struct_idx.clear();
    correspond_struct_idx.resize(key_points_for_all.size());
    for (int i = 0; i < key_points_for_all.size(); ++i)
    {
        correspond_struct_idx[i].resize(key_points_for_all[i].size(), -1);
    }

    //Fill in the structure indices of the first two images
    int idx = 0;
    vector<DMatch>& matches = matches_for_all[0];
    for (int i = 0; i < matches.size(); ++i)
    {
        if (mask.at<uchar>(i) == 0)
            continue;
        correspond_struct_idx[0][matches[i].queryIdx] = idx;
        correspond_struct_idx[1][matches[i].trainIdx] = idx;
        ++idx;
    }
}

void get_file_names(string dir_name, vector<string>& names)
{
    names.clear();
    tinydir_dir dir;
    tinydir_open(&dir, dir_name.c_str());
    while (dir.has_next)
    {
        tinydir_file file;
        tinydir_readfile(&dir, &file);
        if (!file.is_dir)
        {
            //cout<<file.path<<endl;
            names.push_back(file.path);
        }
        tinydir_next(&dir);
    }
    tinydir_close(&dir);
}

int main()
{
    vector<string> img_names;
    get_file_names("images", img_names);
    for (auto it = img_names.begin(); it != img_names.end(); ++it)
    {
        cout << *it << endl;
    }

    Mat K(Matx33d(
        2759.48, 0, 1520.69,
        0, 2764.16, 1006.81,
        0,       0,       1));

    vector<vector<KeyPoint>> key_points_for_all;
    vector<Mat> descriptor_for_all;
    vector<vector<Vec3b>> colors_for_all;
    vector<vector<DMatch>> matches_for_all;
    extract_features(img_names, key_points_for_all, descriptor_for_all, colors_for_all);
    match_features(descriptor_for_all, matches_for_all);

    vector<Point3f> structure;
    vector<vector<int>> correspond_struct_idx;
    vector<Vec3b> colors;
    vector<Mat> rotations;
    vector<Mat> motions;

    init_structure(
        K,
        key_points_for_all,
        colors_for_all,
        matches_for_all,
        structure,
        correspond_struct_idx,
        colors,
        rotations,
        motions
        );

    //Reconstruct the remaining images incrementally
    for (int i = 1; i < matches_for_all.size(); ++i)
    {
        vector<Point3f> object_points;
        vector<Point2f> image_points;
        Mat r, R, T;
        //Mat mask;

        //Get the 3D points corresponding to the matched features in image i,
        //and their pixel coordinates in image i+1
        get_objpoints_and_imgpoints(
            matches_for_all[i],
            correspond_struct_idx[i],
            structure,
            key_points_for_all[i+1],
            object_points,
            image_points
            );

        //Solve for the transformation
        solvePnPRansac(object_points, image_points, K, noArray(), r, T);
        //Convert the rotation vector to a rotation matrix
        Rodrigues(r, R);
        //Save the transformation
        rotations.push_back(R);
        motions.push_back(T);

        vector<Point2f> p1, p2;
        vector<Vec3b> c1, c2;
        get_matched_points(key_points_for_all[i], key_points_for_all[i + 1], matches_for_all[i], p1, p2);
        get_matched_colors(colors_for_all[i], colors_for_all[i + 1], matches_for_all[i], c1, c2);

        //Triangulate using the R and T obtained above
        vector<Point3f> next_structure;
        reconstruct(K, rotations[i], motions[i], R, T, p1, p2, next_structure);

        //Fuse the new reconstruction with the existing point cloud
        fusion_structure(
            matches_for_all[i],
            correspond_struct_idx[i],
            correspond_struct_idx[i + 1],
            structure,
            next_structure,
            colors,
            c1
            );
    }

    //Save the result
    save_structure(OUTPUT_NAME, rotations, motions, structure, colors);
    cout << OUTPUT_NAME << endl;
    return 0;
}
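The program can be built on Linux with the Makefile below; note that it links against the xfeatures2d module from opencv_contrib, which must be installed for SIFT support.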
CXX ?= g++
CXXFLAGS += -c -std=c++11 -Wall $(shell pkg-config --cflags opencv)
LDFLAGS += -lopencv_xfeatures2d -lopencv_highgui -lopencv_core -lopencv_imgproc -lopencv_imgcodecs -lopencv_calib3d -lopencv_features2d

all: opencv_example

opencv_example: example.o; $(CXX) $< -o $@ $(LDFLAGS)

%.o: %.cpp; $(CXX) $< -o $@ $(CXXFLAGS)

clean: ; rm -f example.o opencv_example
