VINS --- Initialization


From the matched visual feature points, the epipolar constraint gives the essential matrix E; decomposing E then recovers the relative camera rotation R and translation T.

A detailed explanation of the underlying theory can be found here: http://blog.csdn.net/aichipmunk/article/details/48157369

bool MotionEstimator::solveRelativeRT(const vector<pair<Vector3d, Vector3d>> &corres, Matrix3d &Rotation, Vector3d &Translation)
{
    if (corres.size() >= 15)
    {
        vector<cv::Point2f> ll, rr;
        for (int i = 0; i < int(corres.size()); i++)
        {
            ll.push_back(cv::Point2f(corres[i].first(0), corres[i].first(1)));
            rr.push_back(cv::Point2f(corres[i].second(0), corres[i].second(1)));
        }
        cv::Mat mask;
        // Estimate the essential matrix from the matched points (normalized coordinates,
        // so K = I); RANSAC additionally rejects outlier matches
        cv::Mat E = cv::findFundamentalMat(ll, rr, cv::FM_RANSAC, 0.3 / 460, 0.99, mask);
        //double feasible_count = countNonZero(mask);
        //cout << (int)feasible_count << " -in- " << p1.size() << endl;
        cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);
        cv::Mat rot, trans;
        // Decompose the essential matrix to recover the relative transformation
        int inlier_cnt = cv::recoverPose(E, ll, rr, cameraMatrix, rot, trans, mask);
        //cout << "inlier_cnt " << inlier_cnt << endl;

        Eigen::Matrix3d R;
        Eigen::Vector3d T;
        for (int i = 0; i < 3; i++)
        {
            T(i) = trans.at<double>(i, 0);
            for (int j = 0; j < 3; j++)
                R(i, j) = rot.at<double>(i, j);
        }

        Rotation = R.transpose();
        Translation = -R.transpose() * T;
        if (inlier_cnt > 12)
            return true;
        else
            return false;
    }
    return false;
}

By combining the gyroscope rotations from IMU pre-integration with the rotations obtained by decomposing the Fundamental matrix of the visual feature matches, a constraint between the two rotation sequences is formed; stacking these constraints into a system of equations and solving it yields the camera-IMU extrinsic rotation (see the sketch below).
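This is the standard rotation-only hand-eye constraint: for every consecutive frame pair, the IMU rotation q_imu and the camera rotation q_cam satisfy q_imu ⊗ q_bc = q_bc ⊗ q_cam, where q_bc is the camera-to-IMU rotation (in VINS-Mono this lives in InitialEXRotation::CalibrationExRotation). Below is a minimal Eigen sketch of that solve; the function name solveExtrinsicRotation and the skew helper are illustrative, not the actual implementation.

// A minimal sketch (illustrative names, not the VINS-Mono implementation):
// stack (Q_left(q_imu) - Q_right(q_cam)) * q_bc = 0 over all frame pairs and
// take q_bc as the singular vector of the smallest singular value.
#include <vector>
#include <Eigen/Dense>

static Eigen::Matrix3d skew(const Eigen::Vector3d &v)
{
    Eigen::Matrix3d m;
    m <<      0, -v.z(),  v.y(),
          v.z(),      0, -v.x(),
         -v.y(),  v.x(),      0;
    return m;
}

Eigen::Quaterniond solveExtrinsicRotation(const std::vector<Eigen::Quaterniond> &q_imu,
                                          const std::vector<Eigen::Quaterniond> &q_cam)
{
    const int n = static_cast<int>(q_imu.size());
    Eigen::MatrixXd A(4 * n, 4);
    for (int i = 0; i < n; ++i)
    {
        // Left-multiplication matrix of q_imu (unknown quaternion stored as [x y z w])
        Eigen::Matrix4d L = Eigen::Matrix4d::Zero();
        L.block<3, 3>(0, 0) = q_imu[i].w() * Eigen::Matrix3d::Identity() + skew(q_imu[i].vec());
        L.block<3, 1>(0, 3) = q_imu[i].vec();
        L.block<1, 3>(3, 0) = -q_imu[i].vec().transpose();
        L(3, 3) = q_imu[i].w();

        // Right-multiplication matrix of q_cam
        Eigen::Matrix4d R = Eigen::Matrix4d::Zero();
        R.block<3, 3>(0, 0) = q_cam[i].w() * Eigen::Matrix3d::Identity() - skew(q_cam[i].vec());
        R.block<3, 1>(0, 3) = q_cam[i].vec();
        R.block<1, 3>(3, 0) = -q_cam[i].vec().transpose();
        R(3, 3) = q_cam[i].w();

        A.block<4, 4>(4 * i, 0) = L - R;
    }
    // The null-space direction of A is the extrinsic rotation
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeFullV);
    Eigen::Vector4d v = svd.matrixV().col(3);   // [x y z w]
    return Eigen::Quaterniond(v(3), v(0), v(1), v(2)).normalized();
}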

Next, the estimator checks whether the current frame_count has reached WINDOW_SIZE, to make sure enough frames are available for initialization (a sketch of this gate is shown below);
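A simplified sketch of the call site (the real processImage additionally checks the extrinsic-estimation mode and a minimum time gap between initialization attempts):

// Sketch: initialization is only attempted once the sliding window is full
if (frame_count == WINDOW_SIZE)
{
    if (initialStructure())
        solver_flag = NON_LINEAR;   // switch to tightly-coupled optimization
}
else
    frame_count++;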

bool Estimator::initialStructure();

1. Make sure the IMU has moved enough. Only the variation of the linear acceleration is considered: compute the standard deviation of the mean acceleration over the window; a standard deviation above 0.25 means the IMU is sufficiently excited for initialization (this check is commented out in the iOS implementation, for reasons unknown):


    {
        map<double, ImageFrame>::iterator frame_it;
        Vector3d sum_g = Vector3d::Zero();
        // Average acceleration per frame (the first frame has no pre-integration interval, so skip it)
        for (frame_it = all_image_frame.begin(), frame_it++; frame_it != all_image_frame.end(); frame_it++)
        {
            double dt = frame_it->second.pre_integration->sum_dt;
            Vector3d tmp_g = frame_it->second.pre_integration->delta_v / dt;
            sum_g += tmp_g;
        }
        Vector3d aver_g;
        aver_g = sum_g * 1.0 / ((int)all_image_frame.size() - 1);
        // Standard deviation of linear_acceleration
        double var = 0;
        for (frame_it = all_image_frame.begin(), frame_it++; frame_it != all_image_frame.end(); frame_it++)
        {
            double dt = frame_it->second.pre_integration->sum_dt;
            Vector3d tmp_g = frame_it->second.pre_integration->delta_v / dt;
            var += (tmp_g - aver_g).transpose() * (tmp_g - aver_g);
            //cout << "frame g " << tmp_g.transpose() << endl;
        }
        var = sqrt(var / ((int)all_image_frame.size() - 1));
        //ROS_WARN("IMU variation %f!", var);
        if (var < 0.25)
        {
            ROS_INFO("IMU excitation not enouth!");
            //return false;
        }
    }

 2. Pure visual initialization: solve an SfM problem for the image frames and camera poses in the sliding window:
  a. First obtain feature matches through the FeatureManager and look for a keyframe in the sliding window (its index denoted l) that has enough feature correspondences and sufficiently large parallax with the latest keyframe; the relative R, t between these two frames is recovered with the five-point method and the matched features are triangulated into 3D points:
relativePose(relative_R, relative_T, l)
  b. Solve PnP between the 3D feature points and the 2D features of the keyframes in the sliding window, and refine the result with a ceres optimization:
sfm.construct(frame_count + 1, Q, T, l, relative_R, relative_T, sfm_f, sfm_tracked_points)
  c. Solve PnP for all remaining frames (see the sketch below):
cv::solvePnP(pts_3_vector, pts_2_vector, K, D, rvec, t, 1)
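A minimal sketch of this per-frame PnP step, assuming the observations are already in normalized image coordinates (so the intrinsic matrix is identity and distortion is empty); the helper name solveFramePose and its interface are illustrative, not the exact VINS code:

#include <vector>
#include <Eigen/Dense>
#include <opencv2/opencv.hpp>
#include <opencv2/core/eigen.hpp>

// Sketch: estimate one frame's pose (world expressed in the camera frame, the
// usual PnP convention) from triangulated 3D points and their normalized 2D
// observations, starting from an initial guess.
bool solveFramePose(const std::vector<cv::Point3f> &pts_3_vector,
                    const std::vector<cv::Point2f> &pts_2_vector,
                    Eigen::Matrix3d &R_initial, Eigen::Vector3d &P_initial)
{
    if (pts_2_vector.size() < 6)
        return false;

    // Features are in normalized coordinates, so K = I and no distortion
    cv::Mat K = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);
    cv::Mat D;

    // Use the given pose as the initial guess (useExtrinsicGuess = true)
    cv::Mat r, rvec, t;
    cv::eigen2cv(R_initial, r);
    cv::Rodrigues(r, rvec);
    cv::eigen2cv(P_initial, t);
    if (!cv::solvePnP(pts_3_vector, pts_2_vector, K, D, rvec, t, true))
        return false;

    // Convert the result back: rotation vector -> rotation matrix -> Eigen
    cv::Rodrigues(rvec, r);
    cv::cv2eigen(r, R_initial);
    cv::cv2eigen(t, P_initial);
    return true;
}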
3. Align the IMU with the vision to recover the absolute (metric) scale
bool Estimator::visualInitialAlign()
  a. Solve for the gyroscope bias and the metric scale; metric scale here means the factor needed to align the (up-to-scale) SfM result with the IMU measurements:
bool result = VisualIMUAlignment(all_image_frame, Bgs, g, x);

bool VisualIMUAlignment(map<double, ImageFrame> &all_image_frame, Vector3d* Bgs, Vector3d &g, VectorXd &x)
{
    solveGyroscopeBias(all_image_frame, Bgs);

    if (SolveScale(all_image_frame, g, x))
        return true;
    else
        return false;
}
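solveGyroscopeBias aligns the pre-integrated rotation of each interval with the relative rotation recovered by the visual SfM and solves a small least-squares problem for the bias correction. A minimal sketch of that idea follows; the RotationPair struct and solveGyroBias are assumed interfaces, not the actual ImageFrame/pre_integration types:

#include <vector>
#include <Eigen/Dense>

struct RotationPair
{
    Eigen::Quaterniond q_ij;     // relative rotation from visual SfM (frame i -> j)
    Eigen::Quaterniond delta_q;  // IMU pre-integrated rotation over the same interval
    Eigen::Matrix3d J_q_bg;      // Jacobian of delta_q w.r.t. the gyro bias
};

Eigen::Vector3d solveGyroBias(const std::vector<RotationPair> &pairs)
{
    Eigen::Matrix3d A = Eigen::Matrix3d::Zero();
    Eigen::Vector3d b = Eigen::Vector3d::Zero();
    for (const auto &p : pairs)
    {
        // Residual: delta_q^-1 * q_ij should be identity; twice its vector part
        // is the small-angle error, linear in the bias correction via J_q_bg.
        Eigen::Vector3d r = 2.0 * (p.delta_q.inverse() * p.q_ij).vec();
        A += p.J_q_bg.transpose() * p.J_q_bg;
        b += p.J_q_bg.transpose() * r;
    }
    // Normal equations of the least-squares problem give delta_bg
    return A.ldlt().solve(b);
}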


  b. If the initialization succeeds, the IMU data need to be re-propagated (repropagate), i.e. the pre-integration is redone from the current bias estimate; at the same time, the depth of every feature can be obtained from the triangulation and the scale computed in the previous step (a sketch is given below);
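A conceptual sketch of these two actions (member names such as pre_integrations, Bgs, triangulated_features, and estimated_depth are assumptions about the estimator's state, not verbatim code):

// Re-run every pre-integration with the estimated gyro bias
// (the accelerometer bias is kept at zero during initialization)
for (int i = 0; i <= WINDOW_SIZE; i++)
    pre_integrations[i]->repropagate(Vector3d::Zero(), Bgs[i]);

// Bring the triangulated, up-to-scale feature depths into metric scale
for (auto &feature : triangulated_features)   // hypothetical container
    feature.estimated_depth *= s;             // s: scale from the alignment step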

At this point, vision and IMU are aligned.

Reference:

http://www.cnblogs.com/shang-slam/p/7147095.html
