ceres_solver


1. Google's official site: http://ceres-solver.org/

2. A Chinese blog that explains it well: http://m.blog.csdn.net/HUAJUN998/article


A quick summary of Ceres Solver.

Google's ceres-solver feels, to me, quite similar to g2o.

(1) The most important step is constructing the CostFunction. Depending on which differentiation scheme you choose, there are three ways to build it (automatic differentiation, numeric differentiation, analytic differentiation).

1. For an AutoDiffCostFunction-type CostFunction, we define a struct that overloads a templated operator() (note the argument type is a template parameter) and pass the struct as the argument to AutoDiffCostFunction.

        

// Define the cost functor; operator() is templated so Ceres can evaluate it
// with its automatic-differentiation types as well as with doubles.
struct CostFunctor {
  template <typename T>
  bool operator()(const T* const x, T* residual) const {
    residual[0] = T(10.0) - x[0];
    return true;
  }
};

// Make the CostFunction (1 residual, one 1-dimensional parameter block).
CostFunction* cost_function =
    new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);

2. For a NumericDiffCostFunction-type CostFunction, the setup is similar to AutoDiffCostFunction, except that the struct's argument type is no longer a template parameter: double takes the place of the template type.

      

// Define the cost functor with a plain, non-templated operator().
struct NumericDiffCostFunctor {
  bool operator()(const double* const x, double* residual) const {
    residual[0] = 10.0 - x[0];
    return true;
  }
};

// Make the CostFunction using central differences.
CostFunction* cost_function =
    new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1>(
        new NumericDiffCostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);
3. In some cases AutoDiffCostFunction is not used; for example, when we compute derivatives in closed or approximate form rather than with AutoDiff's chain rule, we need to compute the residual and Jacobian ourselves. In that case we define a subclass of CostFunction or SizedCostFunction.

class QuadraticCostFunction : public ceres::SizedCostFunction<1, 1> {
 public:
  virtual ~QuadraticCostFunction() {}
  virtual bool Evaluate(double const* const* parameters,
                        double* residuals,
                        double** jacobians) const {
    const double x = parameters[0][0];
    residuals[0] = 10 - x;
    // Compute the Jacobian if asked for.
    if (jacobians != NULL && jacobians[0] != NULL) {
      jacobians[0][0] = -1;
    }
    return true;
  }
};
In practice, automatic differentiation (chain rule) is used almost everywhere. Numeric differentiation carries approximation error, which also tends to slow convergence. Hand-derived Jacobians are easy to get wrong (g2o uses hand-written derivatives; doesn't that feel painful?).
(2) Add residual blocks with Problem::AddResidualBlock(cost_function, NULL/loss_function, input_param1, input_param2, ...).

Here loss_function serves to suppress outliers: residual terms with large error are down-weighted rather than allowed to dominate the fit.
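For example, a minimal sketch using one of Ceres' built-in robust losses (the scale 1.0 is an arbitrary choice for illustration):

// Huber loss: quadratic near zero, linear beyond the scale parameter,
// so gross outliers no longer dominate the objective.
ceres::LossFunction* loss_function = new ceres::HuberLoss(1.0);
problem.AddResidualBlock(cost_function, loss_function, &x);

Besides this, there is another function similar to Problem::AddResidualBlock: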

void Problem::AddParameterBlock(double* values, int size, LocalParameterization* local_parameterization)
void Problem::AddParameterBlock(double* values, int size)
This function tells the Problem which blocks are variables of the objective. Strictly speaking, you do not have to call it; the official documentation notes:

The user has the option of explicitly adding the parameter blocks using AddParameterBlock. This causes additional correctness checking; however, AddResidualBlock implicitly adds the parameter blocks if they are not present, so calling AddParameterBlock explicitly is not required.

It is not useless, though: for example, to hold some variables fixed you call Problem::SetParameterBlockConstant(&x), where x is the variable you want to keep constant.
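A minimal sketch (assuming x was already added to the problem, explicitly or implicitly via AddResidualBlock):

problem.SetParameterBlockConstant(&x);  // x keeps its current value during the solve
problem.SetParameterBlockVariable(&x);  // ...and can be released again later if needed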

Another thing to watch is the template arguments when constructing the cost_function, for example:

CostFunction* cost_function =
    new AutoDiffCostFunction<CostFunctor, 1, 1, 1>(new CostFunctor);
Here the first 1 is the number of residual terms, the second 1 is the dimension of input_param1, and the third 1 is the dimension of input_param2; that is, the template arguments of cost_function must correspond to the parameter blocks passed to AddResidualBlock.
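To make the correspondence concrete, here is a sketch of a two-parameter-block functor matching the template arguments above (TwoBlockFunctor and y are hypothetical names):

// 1 residual, one 1-D block x, one 1-D block y:
// matches AutoDiffCostFunction<TwoBlockFunctor, 1, 1, 1>.
struct TwoBlockFunctor {
  template <typename T>
  bool operator()(const T* const x, const T* const y, T* residual) const {
    residual[0] = T(10.0) - x[0] - y[0];
    return true;
  }
};

CostFunction* cost_function =
    new AutoDiffCostFunction<TwoBlockFunctor, 1, 1, 1>(new TwoBlockFunctor);
// The pointers here line up, in order, with the block sizes above.
problem.AddResidualBlock(cost_function, NULL, &x, &y);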


(3) Setting up the solver

Solver::Options options;
options.minimizer_progress_to_stdout = true;
Solver::Summary summary;
Solve(options, &problem, &summary);
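Typically you would then inspect the summary; a minimal sketch continuing the snippet above:

std::cout << summary.BriefReport() << "\n";
// summary.FullReport() gives the detailed iteration and timing breakdown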
The steps above are required. What follows is optional, chosen as needed (mainly options that matter for bundle adjustment).

DEFINE_string(trust_region_strategy, "levenberg_marquardt",
              "Options are: levenberg_marquardt, dogleg.");
DEFINE_string(dogleg, "traditional_dogleg", "Options are: traditional_dogleg,"
              "subspace_dogleg.");
DEFINE_bool(inner_iterations, false, "Use inner iterations to non-linearly "
            "refine each successful trust region step.");
DEFINE_string(blocks_for_inner_iterations, "automatic", "Options are: "
              "automatic, cameras, points, cameras,points, points,cameras");
DEFINE_string(linear_solver, "sparse_schur", "Options are: "
              "sparse_schur, dense_schur, iterative_schur, "
              "sparse_normal_cholesky, dense_qr, dense_normal_cholesky "
              "and cgnr.");
DEFINE_bool(explicit_schur_complement, false, "If using ITERATIVE_SCHUR "
            "then explicitly compute the Schur complement.");
DEFINE_string(preconditioner, "jacobi", "Options are: "
              "identity, jacobi, schur_jacobi, cluster_jacobi, "
              "cluster_tridiagonal.");
DEFINE_string(visibility_clustering, "canonical_views",
              "single_linkage, canonical_views");
DEFINE_string(sparse_linear_algebra_library, "suite_sparse",
              "Options are: suite_sparse and cx_sparse.");
DEFINE_string(dense_linear_algebra_library, "eigen",
              "Options are: eigen and lapack.");
DEFINE_string(ordering, "automatic", "Options are: automatic, user.");
DEFINE_bool(use_quaternions, false, "If true, uses quaternions to represent "
            "rotations. If false, angle axis is used.");
DEFINE_bool(use_local_parameterization, false, "For quaternions, use a local "
            "parameterization.");
DEFINE_bool(robustify, false, "Use a robust loss function.");
DEFINE_double(eta, 1e-2, "Default value for eta. Eta determines the "
              "accuracy of each linear solve of the truncated newton step. "
              "Changing this parameter can affect solve performance.");


Speaking of bundle adjustment: in SLAM and VIO optimization this is the core problem being solved. Because bundle adjustment has an inherently sparse structure, we can exploit that sparsity for a much more efficient solve. In ceres-solver, SPARSE_SCHUR, DENSE_SCHUR and ITERATIVE_SCHUR take full advantage of BA's sparsity. We can set Options::ordering_type = ceres::SCHUR and it will determine the ParameterBlock ordering automatically; of course, the ordering can also be set by hand, as in the sketch below.
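A hedged sketch of a manual Schur elimination ordering (the points and cameras containers are hypothetical; group 0 is eliminated first, so in BA the points conventionally go in group 0 and the cameras in group 1):

ceres::ParameterBlockOrdering* ordering = new ceres::ParameterBlockOrdering;
for (double* point : points) {
  ordering->AddElementToGroup(point, 0);   // eliminate structure first
}
for (double* camera : cameras) {
  ordering->AddElementToGroup(camera, 1);  // cameras form the reduced system
}
options.linear_solver_ordering.reset(ordering);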

In ceres-solver's solve call,

Solve(options, &problem, &summary)

summary is nothing special; declaring one is enough. Problem, as discussed above, is mainly where variables and residuals are added. options, however, involves real knowledge in practice; below is a quick tour of options as used in BA.

First, some per-iteration settings for the minimizer.

void SetMinimizerOptions(Solver::Options* options) {
  options->max_num_iterations = FLAGS_num_iterations;           // maximum number of iterations
  options->minimizer_progress_to_stdout = true;                 // print progress to stdout
  options->num_threads = FLAGS_num_threads;                     // number of threads
  options->eta = FLAGS_eta;                                     // accuracy of each linear solve
  options->max_solver_time_in_seconds = FLAGS_max_solver_time;  // maximum solver time
  options->use_nonmonotonic_steps = FLAGS_nonmonotonic_steps;   // allow non-monotonic steps
  if (FLAGS_line_search) {
    options->minimizer_type = ceres::LINE_SEARCH;               // line search instead of trust region
  }
}
Next, the settings for how the linear system is solved at each iteration.

void SetLinearSolver(Solver::Options* options) {
  CHECK(StringToLinearSolverType(FLAGS_linear_solver,
                                 &options->linear_solver_type));
  CHECK(StringToPreconditionerType(FLAGS_preconditioner,
                                   &options->preconditioner_type));
  CHECK(StringToVisibilityClusteringType(FLAGS_visibility_clustering,
                                         &options->visibility_clustering_type));
  CHECK(StringToSparseLinearAlgebraLibraryType(
            FLAGS_sparse_linear_algebra_library,
            &options->sparse_linear_algebra_library_type));
  CHECK(StringToDenseLinearAlgebraLibraryType(
            FLAGS_dense_linear_algebra_library,
            &options->dense_linear_algebra_library_type));
  options->num_linear_solver_threads = FLAGS_num_threads;
  options->use_explicit_schur_complement = FLAGS_explicit_schur_complement;
}
Regarding options->linear_solver_type, the official documentation explains:

Default: SPARSE_NORMAL_CHOLESKY / DENSE_QR

Type of linear solver used to compute the solution to the linear least squares problem in each iteration of the Levenberg-Marquardt algorithm. If Ceres is built with support for SuiteSparse or CXSparse or Eigen's sparse Cholesky factorization, the default is SPARSE_NORMAL_CHOLESKY; it is DENSE_QR otherwise.
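Choosing one explicitly is a single assignment, e.g. (a sketch):

options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;  // or ceres::DENSE_QR, ceres::SPARSE_SCHUR, ...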

As for how these differ in detail, I have not read the source code yet (please have a look yourselves). The rest I will copy straight from the official documentation, since I do not fully understand the differences either.

VisibilityClusteringType Solver::Options::visibility_clustering_type

Default: CANONICAL_VIEWS

Type of clustering algorithm to use when constructing a visibility based preconditioner. The original visibility based preconditioning paper and implementation only used the canonical views algorithm.

This algorithm gives high-quality results, but for large dense graphs it can be particularly expensive, as its worst-case complexity is cubic in the size of the graph.

Another option is to use SINGLE_LINKAGE which is a simple thresholded single linkage clustering algorithm that only pays attention to tightly coupled blocks in the Schur complement. This is a fast algorithm that works well.

The optimal choice of the clustering algorithm depends on the sparsity structure of the problem, but generally speaking we recommend that you try CANONICAL_VIEWS first and if it is too expensive try SINGLE_LINKAGE.
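Following that recommendation in code looks like this (a sketch; the setting only matters when a visibility-based preconditioner is selected):

options.preconditioner_type = ceres::CLUSTER_JACOBI;           // a visibility-based preconditioner
options.visibility_clustering_type = ceres::CANONICAL_VIEWS;   // try this first
// options.visibility_clustering_type = ceres::SINGLE_LINKAGE; // fall back if too expensive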

DenseLinearAlgebraLibraryType Solver::Options::dense_linear_algebra_library_type

Default: EIGEN

Ceres supports using multiple dense linear algebra libraries for dense matrix factorizations. Currently EIGEN and LAPACK are the valid choices. EIGEN is always available, LAPACK refers to the system BLAS + LAPACK library which may or may not be available.

This setting affects the DENSE_QR, DENSE_NORMAL_CHOLESKY and DENSE_SCHUR solvers. For small to moderate sized problems EIGEN is a fine choice, but for large problems an optimized LAPACK + BLAS implementation can make a substantial difference in performance.
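Opting into LAPACK is again one line (a sketch; requires Ceres built against a BLAS + LAPACK implementation):

options.dense_linear_algebra_library_type = ceres::LAPACK;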

SparseLinearAlgebraLibraryType Solver::Options::sparse_linear_algebra_library_type

Default: The highest available according to: SUITE_SPARSE > CX_SPARSE > EIGEN_SPARSE > NO_SPARSE

Ceres supports the use of three sparse linear algebra libraries: SuiteSparse, which is enabled by setting this parameter to SUITE_SPARSE; CXSparse, which can be selected by setting this parameter to CX_SPARSE; and Eigen, which is enabled by setting this parameter to EIGEN_SPARSE. Lastly, NO_SPARSE means that no sparse linear solver should be used; note that this is irrespective of whether Ceres was compiled with support for one.

SuiteSparse is a sophisticated and complex sparse linear algebra library and should be used in general.

If your needs/platforms prevent you from using SuiteSparse, consider using CXSparse, which is a much smaller, easier to build library. As can be expected, its performance on large problems is not comparable to that of SuiteSparse.

Last but not least, you can use the sparse linear algebra routines in Eigen. Currently the performance of this library is the poorest of the three, but this should change in the near future.

Another thing to consider here is that the sparse Cholesky factorization libraries in Eigen are licensed under LGPL and building Ceres with support for EIGEN_SPARSE will result in an LGPL licensed library (since the corresponding code from Eigen is compiled into the library).

The upside is that you do not need to build and link to an external library to use EIGEN_SPARSE.
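Selecting the library mirrors the dense case (a sketch):

options.sparse_linear_algebra_library_type = ceres::SUITE_SPARSE;
// alternatives: ceres::CX_SPARSE, ceres::EIGEN_SPARSE, ceres::NO_SPARSE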

int Solver::Options::num_linear_solver_threads

Default: 1

Number of threads used by the linear solver.
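Setting it is straightforward (a sketch; note that newer Ceres releases fold this into options.num_threads):

options.num_linear_solver_threads = 4;  // hypothetical count; match your CPU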

If you have read this far, you must already understand SLAM deeply (unlike me). The most important parts of BA are marginalization and sparsification. If these are unfamiliar, two papers are worth reading:

Nonlinear Graph Sparsification for SLAM

Decoupled, Consistent Node Removal and Edge Sparsification for Graph-based SLAM
There is also a blog post whose analysis is spot on: http://blog.csdn.net/heyijia0327/article/details/52822104


Once the theory of marginalization and sparsification is clear, how do we express it with ceres-solver? That is what I need to study next.


