An Introduction to Several Commonly Used Algorithms in the Ceres Solver
http://homes.cs.washington.edu/~sagarwal/ceres-solver/stable/solving.html
Solving
Introduction
Effective use of Ceres requires some familiarity with the basic components of a nonlinear least squares solver, so before we describe how to configure and use the solver, we will take a brief look at how some of the core optimization algorithms in Ceres work.
Let x ∈ Rⁿ be an n-dimensional vector of variables, and F(x) = [f₁(x), ..., f_m(x)]ᵀ be an m-dimensional function of x. We are interested in solving the following optimization problem:

  arg min_x ½‖F(x)‖²    (1)

Here, the Jacobian J(x) of F(x) is an m×n matrix, where J_ij(x) = ∂_j f_i(x), and the gradient vector is g(x) = ∇½‖F(x)‖² = J(x)ᵀF(x). Since the efficient global minimization of (1) for general F(x) is an intractable problem, we will have to settle for finding a local minimum.
The general strategy when solving non-linear optimization problems is to solve a sequence of approximations to the original problem [NocedalWright]. At each iteration, the approximation is solved to determine a correction Δx to the vector x. For non-linear least squares, an approximation can be constructed by using the linearization F(x + Δx) ≈ F(x) + J(x)Δx, which leads to the following linear least squares problem:

  min_Δx ½‖J(x)Δx + F(x)‖²    (2)

Unfortunately, naively solving a sequence of these problems and updating x ← x + Δx leads to an algorithm that may not converge. To get a convergent algorithm, we need to control the size of the step Δx. Depending on how the size of the step Δx is controlled, non-linear optimization algorithms can be divided into two major categories [NocedalWright].
- Trust Region The trust region approach approximates the objective function using a model function (often a quadratic) over a subset of the search space known as the trust region. If the model function succeeds in minimizing the true objective function, the trust region is expanded; if not, it is contracted and the model optimization problem is solved again.
- Line Search The line search approach first finds a descent direction along which the objective function will be reduced and then computes a step size that decides how far to move along that direction. The descent direction can be computed by various methods, such as gradient descent, Newton's method and Quasi-Newton methods. The step size can be determined either exactly or inexactly.
Trust region methods are in some sense dual to line search methods: trust region methods first choose a step size (the size of the trust region) and then a step direction while line search methods first choose a step direction and then a step size. Ceres implements multiple algorithms in both categories.
Trust Region Methods
The basic trust region algorithm looks something like this:

1. Given an initial point x and a trust region radius μ.
2. Solve
   arg min_Δx ½‖J(x)Δx + F(x)‖²  s.t. ‖D(x)Δx‖² ≤ μ
3. ρ = (‖F(x + Δx)‖² − ‖F(x)‖²) / (‖J(x)Δx + F(x)‖² − ‖F(x)‖²)
4. if ρ > ε then x = x + Δx.
5. if ρ > η₁ then μ = 2μ
6. else if ρ < η₂ then μ = 0.5μ
7. Go to 2.

Here, μ is the trust region radius, D(x) is some matrix used to define a metric on the domain of F(x), and ρ measures the quality of the step Δx, i.e., how well the linear model predicted the decrease in the value of the non-linear objective. The idea is to increase or decrease the radius of the trust region depending on how well the linearization predicts the behavior of the non-linear objective, which in turn is reflected in the value of ρ.
The key computational step in a trust-region algorithm is the solution of the constrained optimization problem

  arg min_Δx ½‖J(x)Δx + F(x)‖²  s.t. ‖D(x)Δx‖² ≤ μ    (3)
There are a number of different ways of solving this problem, each giving rise to a different concrete trust-region algorithm. Currently, Ceres implements two trust-region algorithms - Levenberg-Marquardt and Dogleg. The user can choose between them by setting Solver::Options::trust_region_strategy_type.
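Selecting the strategy is a one-line configuration change. Below is a minimal sketch, assuming a ceres::Problem named problem has already been populated with residual blocks:

    #include "ceres/ceres.h"

    ceres::Solver::Options options;
    // Pick the algorithm used to compute the trust region step.
    options.trust_region_strategy_type = ceres::LEVENBERG_MARQUARDT;
    // or: options.trust_region_strategy_type = ceres::DOGLEG;

    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);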
Levenberg-Marquardt
The Levenberg-Marquardt algorithm [Levenberg] [Marquardt] is the most popular algorithm for solving non-linear least squares problems. It was also the first trust region algorithm to be developed [Levenberg] [Marquardt]. Ceres implements an exact step [Madsen] and an inexact step variant of the Levenberg-Marquardt algorithm [WrightHolt] [NashSofer].
It can be shown that the solution to (3) can be obtained by solving an unconstrained optimization problem of the form

  arg min_Δx ½‖J(x)Δx + F(x)‖² + λ‖D(x)Δx‖²

where λ is a Lagrange multiplier that is inversely related to μ. In Ceres we solve for

  arg min_Δx ½‖J(x)Δx + F(x)‖² + (1/μ)‖D(x)Δx‖²    (4)

The matrix D(x) is a non-negative diagonal matrix, typically the square root of the diagonal of the matrix J(x)ᵀJ(x).
Before going further, let us make some notational simplifications. We will assume that the matrix (1/√μ)D has been concatenated at the bottom of the matrix J, and similarly a vector of zeros has been added to the bottom of the vector f, so that the rest of our discussion can be in terms of J and f, i.e., the linear least squares problem

  min_Δx ½‖J(x)Δx + f(x)‖²    (5)
For all but the smallest problems the solution of (5) in each iteration of the Levenberg-Marquardt algorithm is the dominant computational cost in Ceres. Ceres provides a number of different options for solving (5). There are two major classes of methods - factorization and iterative.
The factorization methods are based on computing an exact solution of (4) using a Cholesky or a QR factorization and lead to an exact step Levenberg-Marquardt algorithm. But it is not clear if an exact solution of (4) is necessary at each step of the LM algorithm to solve (1). In fact, we have already seen evidence that this may not be the case, as (4) is itself a regularized version of (2). Indeed, it is possible to construct non-linear optimization algorithms in which the linearized problem is solved approximately. These algorithms are known as inexact Newton or truncated Newton methods [NocedalWright].
An inexact Newton method requires two ingredients. First, a cheap method for approximately solving systems of linear equations. Typically an iterative linear solver like the Conjugate Gradients method is used for this purpose [NocedalWright]. Second, a termination rule for the iterative solver. A typical termination rule is of the form

  ‖H(x)Δx + g(x)‖ ≤ η_k‖g(x)‖    (6)

Here, k indicates the Levenberg-Marquardt iteration number and 0 < η_k < 1 is known as the forcing sequence. [WrightHolt] prove that a truncated Levenberg-Marquardt algorithm that uses an inexact Newton step based on (6) converges for any sequence η_k ≤ η₀ < 1, and the rate of convergence depends on the choice of the forcing sequence η_k.
Ceres supports both exact and inexact step solution strategies. When the user chooses a factorization based linear solver, the exact step Levenberg-Marquardt algorithm is used. When the user chooses an iterative linear solver, the inexact step Levenberg-Marquardt algorithm is used.
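Concretely, the exact/inexact choice falls out of the linear solver selection; a minimal sketch:

    ceres::Solver::Options options;
    // Factorization based solver => exact step Levenberg-Marquardt.
    options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
    // Iterative solver => inexact step Levenberg-Marquardt.
    // options.linear_solver_type = ceres::ITERATIVE_SCHUR;
    // options.linear_solver_type = ceres::CGNR;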
Dogleg
Another strategy for solving the trust region problem (3) was introduced by M. J. D. Powell. The key idea there is to compute two vectors

  Δx_GN = arg min_Δx ½‖J(x)Δx + f(x)‖²
  Δx_C = −(‖g(x)‖² / ‖J(x)g(x)‖²) g(x)

Note that the vector Δx_GN is the Gauss-Newton step, i.e., the solution to (2), and Δx_C is the Cauchy step, the vector that minimizes the linear approximation if we restrict ourselves to moving along the direction of the gradient. Dogleg methods find a vector Δx defined by Δx_GN and Δx_C that solves the trust region problem. Ceres supports two variants that can be chosen by setting Solver::Options::dogleg_type.
TRADITIONAL_DOGLEG as described by Powell, constructs two line segments using the Gauss-Newton and Cauchy vectors and finds the point farthest along this line shaped like a dogleg (hence the name) that is contained in the trust-region. For more details on the exact reasoning and computations, please see Madsen et al [Madsen].
SUBSPACE_DOGLEG is a more sophisticated method that considers the entire two dimensional subspace spanned by these two vectors and finds the point that minimizes the trust region problem in this subspace [ByrdSchnabel].
The key advantage of the Dogleg over Levenberg-Marquardt is that if the step computation for a particular choice of μ is rejected, the Gauss-Newton and Cauchy vectors do not need to be recomputed; determining a new step for the shrunken trust region only involves cheap vector operations, whereas Levenberg-Marquardt has to solve a new linear system for each value of μ.
The Dogleg method can only be used with the exact factorization based linear solvers.
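Configuring the Dogleg strategy therefore pairs Solver::Options::dogleg_type with a factorization based linear solver; a minimal sketch:

    ceres::Solver::Options options;
    options.trust_region_strategy_type = ceres::DOGLEG;
    options.dogleg_type = ceres::SUBSPACE_DOGLEG;  // or ceres::TRADITIONAL_DOGLEG
    // Dogleg requires an exact, factorization based linear solver.
    options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;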
Inner Iterations
Some non-linear least squares problems have additional structure in the way the parameter blocks interact that makes it beneficial to modify the way the trust region step is computed. For example, consider the following regression problem

  y = a₁e^(b₁x) + a₂e^(b₂x)

Given a set of pairs {(x_i, y_i)}, the user wishes to estimate a₁, a₂, b₁ and b₂.
Notice that the expression on the right hand side is linear in a₁ and a₂, and given any value for b₁ and b₂, it is possible to use linear regression to estimate the optimal values of a₁ and a₂. It is therefore possible to analytically eliminate the variables a₁ and a₂ from the problem entirely. Problems like these are known as separable least squares problems, and the most famous algorithm for solving them is Variable Projection.
Similar structure can be found in the matrix factorization with missing data problem. There the corresponding algorithm is known as Wiberg’s algorithm [Wiberg].
Ruhe & Wedin present an analysis of various algorithms for solving separable non-linear least squares problems and refer to Variable Projection as Algorithm I in their paper [RuheWedin].
Implementing Variable Projection is tedious and expensive. Ruhe & Wedin present a simpler algorithm with comparable convergence properties, which they call Algorithm II. Algorithm II performs an additional optimization step to estimate a₁ and a₂ exactly after computing a successful Newton step.
This idea can be generalized to cases where the residual is not linear in a₁ and a₂, i.e.,

  y = f₁(a₁, e^(b₁x)) + f₂(a₂, e^(b₂x))
In this case, we solve for the trust region step for the full problem, and then use it as the starting point to further optimize just a₁ and a₂. For the linear case, this amounts to doing a single linear least squares solve. For non-linear problems, any method for solving the a₁ and a₂ optimization problems will do. The only constraint on a₁ and a₂ (if they are two different parameter blocks) is that they do not co-occur in a residual block. A cost functor for the original regression model is sketched below.
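The following is a minimal sketch of that model in Ceres, loosely patterned after the curve fitting examples in the Ceres documentation; the functor name ExponentialResidual, the variables a1, a2, b1, b2, the observation (x_i, y_i), and the ceres::Problem named problem are illustrative assumptions, not part of the library:

    #include <cmath>
    #include "ceres/ceres.h"

    // Residual r = y - (a1 * exp(b1 * x) + a2 * exp(b2 * x)) for one
    // observation (x, y).
    struct ExponentialResidual {
      ExponentialResidual(double x, double y) : x_(x), y_(y) {}

      template <typename T>
      bool operator()(const T* const a1, const T* const a2,
                      const T* const b1, const T* const b2,
                      T* residual) const {
        using std::exp;  // resolves for double; ADL handles ceres::Jet
        residual[0] = T(y_) - (a1[0] * exp(b1[0] * T(x_)) +
                               a2[0] * exp(b2[0] * T(x_)));
        return true;
      }

      double x_, y_;
    };

    // One residual block per observation; a1, a2, b1, b2 are
    // 1-dimensional parameter blocks.
    problem.AddResidualBlock(
        new ceres::AutoDiffCostFunction<ExponentialResidual, 1, 1, 1, 1, 1>(
            new ExponentialResidual(x_i, y_i)),
        NULL, &a1, &a2, &b1, &b2);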
This inner-iterations idea can be further generalized by not just optimizing (a₁, a₂), but decomposing the graph corresponding to the Hessian matrix's sparsity structure into a collection of non-overlapping independent sets and optimizing each of them.
Setting Solver::Options::use_inner_iterations to true enables the use of this non-linear generalization of Ruhe & Wedin’s Algorithm II. This version of Ceres has a higher iteration complexity, but also displays better convergence behavior per iteration.
Setting Solver::Options::num_threads to the maximum number possible is highly recommended.
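A minimal sketch of enabling inner iterations; the independent set decomposition can be left for Ceres to compute automatically:

    ceres::Solver::Options options;
    options.use_inner_iterations = true;
    // Inner iterations add per-iteration cost; multiple threads help.
    options.num_threads = 4;  // e.g., the number of available cores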
Non-monotonic Steps
Note that the basic trust-region algorithm described above is a descent algorithm in that it only accepts a point if it strictly reduces the value of the objective function.
Relaxing this requirement allows the algorithm to be more efficient in the long term at the cost of some local increase in the value of the objective function.
This is because allowing for non-decreasing objective function values in a principled manner allows the algorithm to jump over boulders as the method is not restricted to move into narrow valleys while preserving its convergence properties.
Setting Solver::Options::use_nonmonotonic_steps to true enables the non-monotonic trust region algorithm as described by Conn, Gould & Toint in [Conn].
Even though the value of the objective function may be larger than the minimum value encountered over the course of the optimization, the final parameters returned to the user are the ones corresponding to the minimum cost over all iterations.
The option to take non-monotonic steps is available for all trust region strategies.
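A minimal sketch of enabling non-monotonic steps:

    ceres::Solver::Options options;
    options.use_nonmonotonic_steps = true;
    // Window size used by the non-monotonic step selection algorithm.
    options.max_consecutive_nonmonotonic_steps = 5;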
Line Search Methods
The implementation of line search algorithms in Ceres Solver is fairly new and not very well tested, so for now this part of the solver should be considered beta quality. We welcome reports of your experiences both good and bad on the mailing list.
The basic line search algorithm looks something like this:

1. Given an initial point x.
2. Δx = −H⁻¹(x)g(x)
3. arg min_μ ½‖F(x + μΔx)‖²
4. x = x + μΔx
5. Go to 2.

Here H(x) is some approximation to the Hessian of the objective function, and g(x) is the gradient at x. Depending on the choice of H(x) we get a variety of different search directions Δx.
Step 3, which is a one dimensional optimization or Line Search along Δx, is what gives this class of methods its name.
Different line search algorithms differ in their choice of the search direction Δx and the method used for the one dimensional optimization along Δx. The choice of H(x) is the primary source of computational complexity in these methods. Currently, Ceres Solver supports four choices of search directions, all aimed at large scale problems:
- STEEPEST_DESCENT This corresponds to choosing H(x) to be the identity matrix. This is not a good search direction for anything but the simplest of problems. It is only included here for completeness.
- NONLINEAR_CONJUGATE_GRADIENT A generalization of the Conjugate Gradient method to non-linear functions. The generalization can be performed in a number of different ways, resulting in a variety of search directions. Ceres Solver currently supports FLETCHER_REEVES, POLAK_RIBIRERE and HESTENES_STIEFEL directions.
- BFGS A generalization of the Secant method to multiple dimensions in which a full, dense approximation to the inverse Hessian is maintained and used to compute a quasi-Newton step [NocedalWright]. BFGS is currently the best known general quasi-Newton algorithm.
- LBFGS A limited memory approximation to the full BFGS method in which the last M iterations are used to approximate the inverse Hessian used to compute a quasi-Newton step [Nocedal], [ByrdNocedal].
Currently Ceres Solver supports both a backtracking and interpolation based Armijo line search algorithm, and a sectioning / zoom interpolation (strong) Wolfe condition line search algorithm. However, note that in order for the assumptions underlying the BFGS and LBFGS methods to be guaranteed to be satisfied, the Wolfe line search algorithm should be used.
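A minimal sketch of configuring a line search minimizer accordingly:

    ceres::Solver::Options options;
    options.minimizer_type = ceres::LINE_SEARCH;
    options.line_search_direction_type = ceres::LBFGS;
    // BFGS/LBFGS assumptions are only guaranteed under Wolfe line search.
    options.line_search_type = ceres::WOLFE;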
LinearSolver
Recall that in both of the trust-region methods described above, the key computational cost is the solution of a linear least squares problem of the form

  min_Δx ½‖J(x)Δx + f(x)‖²    (8)

Let H(x) = J(x)ᵀJ(x) and g(x) = −J(x)ᵀf(x). For notational convenience let us also drop the dependence on x. Then it is easy to see that solving (8) is equivalent to solving the normal equations

  HΔx = g    (9)
Ceres provides a number of different options for solving (8).
DENSE_QR
For small problems (a couple of hundred parameters and a few thousand residuals) with relatively dense Jacobians, DENSE_QR is the method of choice [Bjorck]. Let J = QR be the QR-decomposition of J, where Q is an orthonormal matrix and R is an upper triangular matrix [TrefethenBau]. Then it can be shown that the solution to (9) is given by

  Δx* = −R⁻¹Qᵀf

Ceres uses Eigen's dense QR factorization routines.
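As a worked illustration of the algebra (not of Ceres internals), here is how such a dense QR step could be computed directly with Eigen; the function name is illustrative:

    #include <Eigen/Dense>

    // Solve min_dx 0.5 * ||J dx + f||^2 for a small dense J.
    // Eigen's QR-based least squares solve computes dx = -R^{-1} Q^T f.
    Eigen::VectorXd DenseQrStep(const Eigen::MatrixXd& J,
                                const Eigen::VectorXd& f) {
      return J.colPivHouseholderQr().solve(-f);
    }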
DENSE_NORMAL_CHOLESKY & SPARSE_NORMAL_CHOLESKY
Large non-linear least squares problems are usually sparse. In such cases, using a dense QR factorization is inefficient. Let H = RᵀR be the Cholesky factorization of the normal equations, where R is an upper triangular matrix. Then the solution to (9) is given by

  Δx* = R⁻¹R⁻ᵀg

The observant reader will note that the R in the Cholesky factorization of H is the same upper triangular matrix R in the QR factorization of J. Since Q is an orthonormal matrix, J = QR implies that JᵀJ = RᵀQᵀQR = RᵀR. There are two variants of Cholesky factorization - sparse and dense.
DENSE_NORMAL_CHOLESKY, as the name implies, performs a dense Cholesky factorization of the normal equations. Ceres uses Eigen's dense LDLT factorization routines.
SPARSE_NORMAL_CHOLESKY, as the name implies performs a sparse Cholesky factorization of the normal equations. This leads to substantial savings in time and memory for large sparse problems. Ceres uses the sparse Cholesky factorization routines in Professor Tim Davis’ SuiteSparse or CXSparse packages [Chen].
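To make the equivalence concrete, the same step as in the DENSE_QR sketch above can be obtained by forming the normal equations and using a dense Cholesky (LDLT) solve; again the function name is illustrative:

    #include <Eigen/Dense>

    // Solve H dx = g with H = J^T J and g = -J^T f. Up to round-off,
    // this returns the same step as DenseQrStep above.
    Eigen::VectorXd NormalEquationsStep(const Eigen::MatrixXd& J,
                                        const Eigen::VectorXd& f) {
      const Eigen::MatrixXd H = J.transpose() * J;
      const Eigen::VectorXd g = -J.transpose() * f;
      return H.ldlt().solve(g);
    }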
DENSE_SCHUR & SPARSE_SCHUR
While it is possible to use SPARSE_NORMAL_CHOLESKY to solve bundle adjustment problems, bundle adjustment problems have a special structure, and a more efficient scheme for solving (8) can be constructed.
Suppose that the SfM problem consists of p cameras and q points, and the variable vector x has the block structure x = [y₁, ..., y_p, z₁, ..., z_q], where y and z correspond to camera and point parameters respectively. Further, let the camera blocks be of size c and the point blocks be of size s (for most problems c = 6-9 and s = 3). Ceres does not impose any constancy requirement on these block sizes, but choosing them to be constant simplifies the exposition.
A key characteristic of the bundle adjustment problem is that there is no term f_i that includes two or more point blocks. This in turn implies that the matrix H is of the form

  H = [B E; Eᵀ C]

where B ∈ R^(pc×pc) is a block sparse matrix with p blocks of size c×c, C ∈ R^(qs×qs) is a block diagonal matrix with q blocks of size s×s, and E ∈ R^(pc×qs) is a general block sparse matrix with a block of size c×s for each observation. Let us now block partition Δx = [Δy, Δz] and g = [v, w] to restate (9) as the block structured linear system

  [B E; Eᵀ C][Δy; Δz] = [v; w]    (10)

and apply Gaussian elimination to it. As we noted above, C is a block diagonal matrix with small diagonal blocks of size s×s. Thus, calculating the inverse of C by inverting each of these blocks is cheap. This allows us to eliminate Δz by observing that Δz = C⁻¹(w − EᵀΔy), giving us

  [B − EC⁻¹Eᵀ]Δy = v − EC⁻¹w    (11)

The matrix

  S = B − EC⁻¹Eᵀ

is the Schur complement of C in H. It is also known as the reduced camera matrix, because the only variables participating in (11) are the ones corresponding to the cameras. S ∈ R^(pc×pc) is a block structured symmetric positive definite matrix, with blocks of size c×c. The block S_ij corresponding to the pair of images i and j is non-zero if and only if the two images observe at least one common point.
Now, (10) can be solved by first forming S, solving for Δy, and then back-substituting Δy to obtain the value of Δz. Thus, the solution of what was an n×n (n = pc + qs) linear system is reduced to the inversion of the block diagonal matrix C, a few matrix-matrix and matrix-vector multiplies, and the solution of the block sparse pc×pc linear system (11). For almost all problems, the number of cameras is much smaller than the number of points, p ≪ q, thus solving (11) is significantly cheaper than solving (10). This is the Schur complement trick [Brown].
This still leaves open the question of solving (11). The method of choice for solving symmetric positive definite systems exactly is via the Cholesky factorization [TrefethenBau], and depending upon the structure of the matrix there are, in general, two options. The first is direct factorization, where we store and factor S as a dense matrix [TrefethenBau]. This method has O(p²) space complexity and O(p³) time complexity and is only practical for problems with up to a few hundred cameras. Ceres implements this strategy as the DENSE_SCHUR solver.
But, S is typically a fairly sparse matrix, as most images only see a small fraction of the scene. This leads us to Sparse Direct Methods, which store S as a sparse matrix, use row and column re-ordering algorithms to maximize the sparsity of the Cholesky decomposition, and focus their compute effort on the non-zero part of the factorization [Chen]. Sparse direct methods, depending on the exact sparsity structure of the Schur complement, allow bundle adjustment algorithms to significantly scale up over those based on dense factorization. Ceres implements this strategy as the SPARSE_SCHUR solver.
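A minimal sketch of choosing between the two for a bundle adjustment problem:

    ceres::Solver::Options options;
    // Up to a few hundred cameras: dense Schur complement factorization.
    options.linear_solver_type = ceres::DENSE_SCHUR;
    // Larger problems with a sparse Schur complement:
    // options.linear_solver_type = ceres::SPARSE_SCHUR;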
CGNR
For general sparse problems, if the problem is too large for CHOLMOD or a sparse linear algebra library is not linked into Ceres, another option is the CGNR solver. This solver uses the Conjugate Gradients solver on the normal equations, but without forming the normal equations explicitly. It exploits the relation

  Hx = JᵀJx = Jᵀ(Jx)
When the user chooses CGNR as the linear solver, Ceres automatically switches from the exact step algorithm to an inexact step algorithm.
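Since the relation above means the solver only ever needs products with J and Jᵀ, the iteration can be written matrix-free; a minimal Eigen sketch of the product CGNR iterates on (illustrative, not Ceres internals):

    #include <Eigen/Sparse>

    // Compute H x = J^T (J x) without ever forming H = J^T J.
    Eigen::VectorXd NormalProduct(const Eigen::SparseMatrix<double>& J,
                                  const Eigen::VectorXd& x) {
      const Eigen::VectorXd Jx = J * x;  // m-dimensional intermediate
      return J.transpose() * Jx;         // n-dimensional result
    }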
ITERATIVE_SCHUR
Another option for bundle adjustment problems is to apply the Preconditioned Conjugate Gradients (PCG) algorithm to the reduced camera matrix S instead of H. One reason to do this is that S is a much smaller matrix than H, but more importantly, it can be shown that κ(S) ≤ κ(H), where κ(·) denotes the condition number. Ceres implements PCG on S as the ITERATIVE_SCHUR solver. When the user chooses ITERATIVE_SCHUR as the linear solver, Ceres automatically switches from the exact step algorithm to an inexact step algorithm.
The cost of forming and storing the Schur complement S can be prohibitive for large problems. Indeed, for an inexact Newton solver that computes S and runs PCG on it, almost all of its time is spent in constructing S; the time spent inside the PCG algorithm is negligible in comparison. Because PCG only needs access to S via its product with a vector, one way to evaluate Sx is to observe that

  x₁ = Eᵀx
  x₂ = C⁻¹x₁
  x₃ = Ex₂
  x₄ = Bx
  Sx = x₄ − x₃    (12)

Thus, we can run PCG on S with the same computational effort per iteration as PCG on H, while reaping the benefits of a more powerful preconditioner. In fact, we do not even need to compute H; (12) can be implemented using just the columns of J.
Equation (12) is closely related to Domain Decomposition methods for solving large linear systems that arise in structural engineering and partial differential equations. In the language of Domain Decomposition, each point in a bundle adjustment problem is a domain, and the cameras form the interface between these domains. The iterative solution of the Schur complement then falls within the sub-category of techniques known as Iterative Sub-structuring [Saad] [Mathew].
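A minimal Eigen sketch of the product chain in (12); for brevity the blocks are dense matrices here, whereas a real bundle adjustment solver exploits their block sparse/diagonal structure:

    #include <Eigen/Dense>

    // Evaluate S x = (B - E C^{-1} E^T) x without forming S.
    Eigen::VectorXd SchurProduct(const Eigen::MatrixXd& B,
                                 const Eigen::MatrixXd& E,
                                 const Eigen::MatrixXd& C,
                                 const Eigen::VectorXd& x) {
      const Eigen::VectorXd x1 = E.transpose() * x;
      const Eigen::VectorXd x2 = C.ldlt().solve(x1);  // x2 = C^{-1} x1
      const Eigen::VectorXd x3 = E * x2;
      const Eigen::VectorXd x4 = B * x;
      return x4 - x3;                                 // S x
    }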
Preconditioner
The convergence rate of Conjugate Gradients for solving (9) depends on the distribution of eigenvalues of H [Saad]. A useful upper bound is √κ(H), where κ(H) is the condition number of the matrix H. For most bundle adjustment problems, κ(H) is high and a direct application of Conjugate Gradients to (9) results in extremely poor performance.
The solution to this problem is to replace (9) with a preconditioned system. Given a linear system Ax = b and a preconditioner M, the preconditioned system is given by M⁻¹Ax = M⁻¹b. The resulting algorithm is known as Preconditioned Conjugate Gradients (PCG), and its worst case complexity now depends on the condition number of the preconditioned matrix κ(M⁻¹A).
The computational cost of using a preconditioner M is the cost of computing M and evaluating the product M⁻¹y for arbitrary vectors y. Thus, there are two competing factors to consider: how much of H's structure is captured by M so that the condition number κ(M⁻¹H) is low, and the computational cost of constructing and using M. The ideal preconditioner would be one for which κ(M⁻¹A) = 1. M = A achieves this, but it is not a practical choice, as applying this preconditioner would require solving a linear system equivalent to the unpreconditioned problem.
The simplest of all preconditioners is the diagonal or Jacobi preconditioner, i.e., M = diag(A), which for block structured matrices like H can be generalized to the block Jacobi preconditioner.
For ITERATIVE_SCHUR there are two obvious choices for block diagonal preconditioners for S: the block diagonal of the matrix B [Mandel] and the block diagonal of S, i.e., the block Jacobi preconditioner for S. Ceres implements both of these preconditioners and refers to them as JACOBI and SCHUR_JACOBI respectively.
For bundle adjustment problems arising in reconstruction from community photo collections, more effective preconditioners can be constructed by analyzing and exploiting the camera-point visibility structure of the scene [KushalAgarwal]. Ceres implements the two visibility based preconditioners described by Kushal & Agarwal as CLUSTER_JACOBI and CLUSTER_TRIDIAGONAL. These are fairly new preconditioners and Ceres’ implementation of them is in its early stages and is not as mature as the other preconditioners described above.
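A minimal sketch of pairing ITERATIVE_SCHUR with a preconditioner:

    ceres::Solver::Options options;
    options.linear_solver_type = ceres::ITERATIVE_SCHUR;
    options.preconditioner_type = ceres::SCHUR_JACOBI;
    // Alternatives: ceres::JACOBI, or the visibility based
    // ceres::CLUSTER_JACOBI and ceres::CLUSTER_TRIDIAGONAL.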
Ordering
The order in which variables are eliminated in a linear solver can have a significant impact on the efficiency and accuracy of the method. For example, when doing sparse Cholesky factorization, there are matrices for which a good ordering will give a Cholesky factor with O(n) storage, whereas a bad ordering will result in a completely dense factor.
Ceres allows the user to provide varying amounts of hints to the solver about the variable elimination ordering to use. This can range from no hints, where the solver is free to decide the best ordering based on the user’s choices like the linear solver being used, to an exact order in which the variables should be eliminated, and a variety of possibilities in between.
Instances of the ParameterBlockOrdering class are used to communicate this information to Ceres.
Formally, an ordering is an ordered partitioning of the parameter blocks. Each parameter block belongs to exactly one group, and each group has a unique integer associated with it that determines its order in the set of groups. We call these groups Elimination Groups.
Given such an ordering, Ceres ensures that the parameter blocks in the lowest numbered elimination group are eliminated first, then the parameter blocks in the next lowest numbered elimination group, and so on. Within each elimination group, Ceres is free to order the parameter blocks as it chooses. e.g. Consider the linear system

  x + y = 3
  2x + 3y = 7

There are two ways in which it can be solved. First eliminating x from the two equations, solving for y and then back substituting for x, or first eliminating y, solving for x and back substituting for y. The user can construct three orderings here:

- {0: x}, {1: y} : Eliminate x first.
- {0: y}, {1: x} : Eliminate y first.
- {0: x, y} : Solver gets to decide the elimination order.
Thus, to have Ceres determine the ordering automatically using heuristics, put all the variables in the same elimination group. The identity of the group does not matter. This is the same as not specifying an ordering at all. To control the ordering for every variable, create an elimination group per variable, ordering them in the desired order.
If the user is using one of the Schur solvers (DENSE_SCHUR, SPARSE_SCHUR, ITERATIVE_SCHUR) and chooses to specify an ordering, it must have one important property. The lowest numbered elimination group must form an independent set in the graph corresponding to the Hessian, or in other words, no two parameter blocks in the first elimination group should co-occur in the same residual block. For the best performance, this elimination group should be as large as possible. For standard bundle adjustment problems, this corresponds to the first elimination group containing all the 3D points, and the second containing all the camera parameter blocks.
If the user leaves the choice to Ceres, then the solver uses an approximate maximum independent set algorithm to identify the first elimination group [LiSaad].
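A minimal sketch of specifying the standard bundle adjustment ordering; the containers points and cameras (holding double* parameter block pointers) are hypothetical, and note that older Ceres versions take linear_solver_ordering as a raw pointer (as below) while newer versions use a shared_ptr set via reset():

    ceres::ParameterBlockOrdering* ordering =
        new ceres::ParameterBlockOrdering;
    // Group 0 is eliminated first: all the 3D points.
    for (size_t i = 0; i < points.size(); ++i) {
      ordering->AddElementToGroup(points[i], 0);
    }
    // Group 1: all the camera parameter blocks.
    for (size_t i = 0; i < cameras.size(); ++i) {
      ordering->AddElementToGroup(cameras[i], 1);
    }
    options.linear_solver_ordering = ordering;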
Solver::Options
- class Solver::Options
Solver::Options controls the overall behavior of the solver. We list the various settings and their default values below.
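A minimal end-to-end sketch of configuring a few of these options and running the solver on an existing ceres::Problem named problem:

    ceres::Solver::Options options;
    options.minimizer_type = ceres::TRUST_REGION;  // the default
    options.max_num_iterations = 100;
    options.function_tolerance = 1e-6;
    options.num_threads = 4;

    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);
    std::cout << summary.BriefReport() << "\n";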
- MinimizerType Solver::Options::minimizer_type
Default: TRUST_REGION
Choose between LINE_SEARCH and TRUST_REGION algorithms. See Trust Region Methods and Line Search Methods for more details.
- LineSearchDirectionType Solver::Options::line_search_direction_type
Default: LBFGS
Choices are STEEPEST_DESCENT, NONLINEAR_CONJUGATE_GRADIENT, BFGS and LBFGS.
- LineSearchType Solver::Options::line_search_type
Default: WOLFE
Choices are ARMIJO and WOLFE (strong Wolfe conditions). Note that in order for the assumptions underlying the BFGS and LBFGS line search direction algorithms to be guaranteed to be satisfied, the WOLFE line search should be used.
- NonlinearConjugateGradientType Solver::Options::nonlinear_conjugate_gradient_type
Default: FLETCHER_REEVES
Choices are FLETCHER_REEVES, POLAK_RIBIRERE and HESTENES_STIEFEL.
- int Solver::Options::max_lbfgs_rank
Default: 20
The L-BFGS Hessian approximation is a low rank approximation to the inverse of the Hessian matrix. The rank of the approximation determines (linearly) the space and time complexity of using the approximation. The higher the rank, the better the quality of the approximation. The increase in quality is, however, bounded for a number of reasons.
- The method only uses secant information and not actual derivatives.
- The Hessian approximation is constrained to be positive definite.
So increasing this rank to a large number will cost time and space complexity without the corresponding increase in solution quality. There are no hard and fast rules for choosing the maximum rank. The best choice usually requires some problem specific experimentation.
- bool Solver::Options::use_approximate_eigenvalue_bfgs_scaling
Default: false
As part of the BFGS update step / LBFGS right-multiply step, the initial inverse Hessian approximation is taken to be the Identity. However, [Oren] showed that using instead I·γ, where γ is a scalar chosen to approximate an eigenvalue of the true inverse Hessian, can result in improved convergence in a wide variety of cases. Setting use_approximate_eigenvalue_bfgs_scaling to true enables this scaling in BFGS (before first iteration) and LBFGS (at each iteration).
Precisely, approximate eigenvalue scaling equates to

  γ = (yₖᵀsₖ) / (yₖᵀyₖ)

with

  yₖ = ∇f(k+1) − ∇f(k)
  sₖ = x(k+1) − x(k)

where f() is the line search objective and x the vector of parameter values [NocedalWright].
It is important to note that approximate eigenvalue scaling does not always improve convergence, and that it can in fact significantly degrade performance for certain classes of problem, which is why it is disabled by default. In particular, it can degrade performance when the sensitivity of the problem to different parameters varies significantly, as in this case a single scalar factor fails to capture this variation and detrimentally downscales parts of the Jacobian approximation which correspond to low-sensitivity parameters. It can also reduce the robustness of the solution to errors in the Jacobians.
- LineSearchInterpolationType Solver::Options::line_search_interpolation_type
Default: CUBIC
Degree of the polynomial used to approximate the objective function. Valid values are BISECTION, QUADRATIC and CUBIC.
- double Solver::Options::min_line_search_step_size
The line search terminates if

  ‖Δxₖ‖∞ < min_line_search_step_size

where ‖·‖∞ refers to the max norm and Δxₖ is the step change in the parameter values at the k-th iteration.
- double Solver::Options::line_search_sufficient_function_decrease
Default: 1e-4
Solving the line search problem exactly is computationally prohibitive. Fortunately, line search based optimization algorithms can still guarantee convergence if instead of an exact solution, the line search algorithm returns a solution which decreases the value of the objective function sufficiently. More precisely, we are looking for a step size s.t.
  f(step_size) ≤ f(0) + sufficient_decrease · [f′(0) · step_size]

This condition is known as the Armijo condition.
- double Solver::Options::max_line_search_step_contraction
Default: 1e-3
In each iteration of the line search,

  new_step_size ≥ max_line_search_step_contraction · step_size

Note that by definition, for contraction

  0 < max_step_contraction < min_step_contraction < 1
- double Solver::Options::min_line_search_step_contraction
Default: 0.6
In each iteration of the line search,

  new_step_size ≤ min_line_search_step_contraction · step_size

Note that by definition, for contraction

  0 < max_step_contraction < min_step_contraction < 1
- int Solver::Options::max_num_line_search_step_size_iterations
Default: 20
Maximum number of trial step size iterations during each line search. If a step size satisfying the search conditions cannot be found within this number of trials, the line search will stop.
As this is an 'artificial' constraint (one imposed by the user, not the underlying math), if WOLFE line search is being used and points satisfying the Armijo sufficient (function) decrease condition have been found during the current search (in ≤ max_num_line_search_step_size_iterations), then the step size with the lowest function value which satisfies the Armijo condition will be returned as the new valid step, even though it does not satisfy the strong Wolfe conditions. This behaviour protects against early termination of the optimizer at a sub-optimal point.
- int Solver::Options::max_num_line_search_direction_restarts
Default: 5
Maximum number of restarts of the line search direction algorithm before terminating the optimization. Restarts of the line search direction algorithm occur when the current algorithm fails to produce a new descent direction. This typically indicates a numerical failure, or a breakdown in the validity of the approximations used.
- double Solver::Options::line_search_sufficient_curvature_decrease
Default: 0.9
The strong Wolfe conditions consist of the Armijo sufficient decrease condition, and an additional requirement that the step size be chosen s.t. the magnitude (‘strong’ Wolfe conditions) of the gradient along the search direction decreases sufficiently. Precisely, this second condition is that we seek a step size s.t.
  ‖f′(step_size)‖ ≤ sufficient_curvature_decrease · ‖f′(0)‖

where f() is the line search objective and f′() is the derivative of f with respect to the step size, df/d(step_size).
- double Solver::Options::max_line_search_step_expansion
Default: 10.0
During the bracketing phase of a Wolfe line search, the step size is increased until either a point satisfying the Wolfe conditions is found, or an upper bound for a bracket containing a point satisfying the conditions is found. Precisely, at each iteration of the expansion

  new_step_size ≤ max_step_expansion · step_size

By definition, for expansion,

  max_step_expansion > 1.0
- TrustRegionStrategyType Solver::Options::trust_region_strategy_type
Default: LEVENBERG_MARQUARDT
The trust region step computation algorithm used by Ceres. Currently LEVENBERG_MARQUARDT and DOGLEG are the two valid choices. See Levenberg-Marquardt and Dogleg for more details.
- DoglegType Solver::Options::dogleg_type
Default: TRADITIONAL_DOGLEG
Ceres supports two different dogleg strategies: the TRADITIONAL_DOGLEG method by Powell and the SUBSPACE_DOGLEG method described by [ByrdSchnabel]. See Dogleg for more details.
- bool Solver::Options::use_nonmonotonic_steps
Default: false
Relax the requirement that the trust-region algorithm take strictly decreasing steps. See Non-monotonic Steps for more details.
- int Solver::Options::max_consecutive_nonmonotonic_steps
Default: 5
The window size used by the step selection algorithm to accept non-monotonic steps.
- int Solver::Options::max_num_iterations
Default: 50
Maximum number of iterations for which the solver should run.
- double Solver::Options::max_solver_time_in_seconds
Default: 1e6
Maximum amount of time for which the solver should run.
- int Solver::Options::num_threads
Default: 1
Number of threads used by Ceres to evaluate the Jacobian.
- double Solver::Options::initial_trust_region_radius
Default: 1e4
The size of the initial trust region. When the LEVENBERG_MARQUARDT strategy is used, the reciprocal of this number is the initial regularization parameter.
- double Solver::Options::max_trust_region_radius
Default: 1e16
The trust region radius is not allowed to grow beyond this value.
- double Solver::Options::min_trust_region_radius
Default: 1e-32
The solver terminates, when the trust region becomes smaller than this value.
- double Solver::Options::min_relative_decrease
Default: 1e-3
Lower threshold for relative decrease before a trust-region step is accepted.
- double Solver::Options::min_lm_diagonal
Default: 1e-6
The LEVENBERG_MARQUARDT strategy uses a diagonal matrix to regularize the trust region step. This is the lower bound on the values of this diagonal matrix.
- double Solver::Options::max_lm_diagonal
Default: 1e32
The LEVENBERG_MARQUARDT strategy uses a diagonal matrix to regularize the trust region step. This is the upper bound on the values of this diagonal matrix.
- int Solver::Options::max_num_consecutive_invalid_steps
Default: 5
The step returned by a trust region strategy can sometimes be numerically invalid, usually because of conditioning issues. Instead of crashing or stopping the optimization, the optimizer can go ahead and try solving with a smaller trust region/better conditioned problem. This parameter sets the number of consecutive retries before the minimizer gives up.
- double Solver::Options::function_tolerance
Default: 1e-6
Solver terminates if

  |Δcost| / cost < function_tolerance

where Δcost is the change in objective function value (up or down) in the current iteration of Levenberg-Marquardt.
- double Solver::Options::gradient_tolerance
Default: 1e-10
Solver terminates if

  ‖g(x)‖∞ < gradient_tolerance · ‖g(x₀)‖∞

where ‖·‖∞ refers to the max norm and x₀ is the vector of initial parameter values.