Ceres solver tutorial


http://homes.cs.washington.edu/~sagarwal/ceres-solver/stable/tutorial.html#chapter-tutorial

http://www.mcs.anl.gov/~anitescu/CLASSES/2012/LECTURES/S310-2012-lect8.pdf

http://www.mcs.anl.gov/~anitescu/CLASSES/2012/LECTURES/S310-2012-lect11.pdf

Tutorial

Ceres solves robustified non-linear least squares problems of the form

(1)
\frac{1}{2}\sum_{i} \rho_i\left(\left\| f_i(x_{i_1}, \ldots, x_{i_k}) \right\|^2\right).

The expression ρ_i(‖f_i(x_{i_1}, ..., x_{i_k})‖^2) is known as a ResidualBlock, where f_i(·) is a CostFunction that depends on the parameter blocks [x_{i_1}, ..., x_{i_k}]. In most optimization problems small groups of scalars occur together. For example, the three components of a translation vector and the four components of the quaternion that define the pose of a camera. We refer to such a group of small scalars as a ParameterBlock. Of course a ParameterBlock can just be a single parameter.

ρ_i is a LossFunction. A LossFunction is a scalar function that is used to reduce the influence of outliers on the solution of non-linear least squares problems. As a special case, when ρ_i(x) = x, i.e., the identity function, we get the more familiar non-linear least squares problem.

(2)
\frac{1}{2}\sum_{i} \left\| f_i(x_{i_1}, \ldots, x_{i_k}) \right\|^2.
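As a concrete illustration of the role of ρ_i (this is a sketch that uses the Problem::AddResidualBlock API shown in the examples below; cost_function and x are placeholders), a loss function such as ceres::HuberLoss can be attached when a residual block is added, while passing NULL keeps the plain squared error of (2):

// Robustified residual block: rho_i is a Huber loss with scale 1.0.
problem.AddResidualBlock(cost_function, new ceres::HuberLoss(1.0), &x);

// Plain residual block (identity rho_i), as in (2).
problem.AddResidualBlock(cost_function, NULL, &x);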

In this chapter we will learn how to solve (1) using Ceres Solver. Full working code for all the examples described in this chapter and more can be found in the examples directory.

Hello World!

To get started, consider the problem of finding the minimum of the function

\frac{1}{2}(10 - x)^2.

This is a trivial problem, whose minimum is located at x=10, but it is a good place to start to illustrate the basics of solving a problem with Ceres [1].

The first step is to write a functor that will evaluate the function f(x) = 10 - x:

struct CostFunctor {
  template <typename T>
  bool operator()(const T* const x, T* residual) const {
    residual[0] = T(10.0) - x[0];
    return true;
  }
};

The important thing to note here is that operator() is a templated method, which assumes that all its inputs and outputs are of some type T. Templates are used here because Ceres will call CostFunctor::operator<T>(), with T=double when just the value of the residual is needed, and with a special type T=Jet when the Jacobians are needed. In Derivatives we discuss the various ways of supplying derivatives to Ceres in more detail.
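To make the templating concrete, here is a small sketch (not part of the original example) showing the same functor instantiated with T = double and evaluated by hand:

CostFunctor functor;
double x = 5.0;
double residual = 0.0;
// Calls operator()<double>; residual becomes 10.0 - 5.0 = 5.0.
functor(&x, &residual);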

Once we have a way of computing the residual function, it is now time to construct a non-linear least squares problem using it and have Ceres solve it.

int main(int argc, char** argv) {
  google::InitGoogleLogging(argv[0]);

  // The variable to solve for with its initial value.
  double initial_x = 5.0;
  double x = initial_x;

  // Build the problem.
  Problem problem;

  // Set up the only cost function (also known as residual). This uses
  // auto-differentiation to obtain the derivative (jacobian).
  CostFunction* cost_function =
      new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
  problem.AddResidualBlock(cost_function, NULL, &x);

  // Run the solver!
  Solver::Options options;
  options.linear_solver_type = ceres::DENSE_QR;
  options.minimizer_progress_to_stdout = true;
  Solver::Summary summary;
  Solve(options, &problem, &summary);

  std::cout << summary.BriefReport() << "\n";
  std::cout << "x : " << initial_x
            << " -> " << x << "\n";
  return 0;
}

AutoDiffCostFunction takes a CostFunctor as input, automatically differentiates it and gives it a CostFunction interface.

Compiling and running examples/helloworld.cc gives us

   0: f: 1.250000e+01 d: 0.00e+00 g: 5.00e+00 h: 0.00e+00 rho: 0.00e+00 mu: 1.00e+04 li:  0 it: 6.91e-06 tt: 1.91e-03
   1: f: 1.249750e-07 d: 1.25e+01 g: 5.00e-04 h: 5.00e+00 rho: 1.00e+00 mu: 3.00e+04 li:  1 it: 2.81e-05 tt: 1.99e-03
   2: f: 1.388518e-16 d: 1.25e-07 g: 1.67e-08 h: 5.00e-04 rho: 1.00e+00 mu: 9.00e+04 li:  1 it: 1.00e-05 tt: 2.01e-03
Ceres Solver Report: Iterations: 2, Initial cost: 1.250000e+01, Final cost: 1.388518e-16, Termination: PARAMETER_TOLERANCE.
x : 5 -> 10

Starting from x = 5, the solver in two iterations goes to 10 [2]. The careful reader will note that this is a linear problem and one linear solve should be enough to get the optimal value. The default configuration of the solver is aimed at non-linear problems, and for reasons of simplicity we did not change it in this example. It is indeed possible to obtain the solution to this problem using Ceres in one iteration. Also note that the solver did get very close to the optimal function value of 0 in the very first iteration. We will discuss these issues in greater detail when we talk about convergence and parameter settings for Ceres.

Footnotes

[1] examples/helloworld.cc
[2] Actually the solver ran for three iterations, and it was by looking at the value returned by the linear solver in the third iteration that it detected that the update to the parameter block was too small and declared convergence. Ceres only prints out the display at the end of an iteration, and terminates as soon as it detects convergence, which is why you only see two iterations here and not three.

Derivatives

Ceres Solver, like most optimization packages, depends on being able to evaluate the value and the derivatives of each term in the objective function at arbitrary parameter values. Doing so correctly and efficiently is essential to getting good results. Ceres Solver provides a number of ways of doing so. You have already seen one of them in action – Automatic Differentiation in examples/helloworld.cc.

We now consider the other two possibilities: analytic and numeric derivatives.

Numeric Derivatives

In some cases, it is not possible to define a templated cost functor, for example when the evaluation of the residual involves a call to a library function that you do not have control over. In such a situation, numerical differentiation can be used. The user defines a functor which computes the residual value and constructs a NumericDiffCostFunction using it. For example, for f(x) = 10 - x the corresponding functor would be

struct NumericDiffCostFunctor {
  bool operator()(const double* const x, double* residual) const {
    residual[0] = 10.0 - x[0];
    return true;
  }
};

Which is added to the Problem as:

CostFunction* cost_function =
  new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1, 1>(
      new NumericDiffCostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);

Notice the parallel from when we were using automatic differentiation

CostFunction* cost_function =
    new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);

The construction looks almost identical to the one used for automatic differentiation, except for an extra template parameter that indicates the kind of finite differencing scheme to be used for computing the numerical derivatives [3]. For more details see the documentation for NumericDiffCostFunction.

Generally speaking we recommend automatic differentiation instead of numeric differentiation. The use of C++ templates makes automatic differentiation efficient, whereas numeric differentiation is expensive, prone to numeric errors, and leads to slower convergence.

Analytic Derivatives

In some cases, using automatic differentiation is not possible. For example, it may be the case that it is more efficient to compute the derivatives in closed form instead of relying on the chain rule used by the automatic differentiation code.

In such cases, it is possible to supply your own residual and Jacobian computation code. To do this, define a subclass of CostFunction or, if you know the sizes of the parameters and residuals at compile time, SizedCostFunction. Here, for example, is QuadraticCostFunction, which implements f(x) = 10 - x.

class QuadraticCostFunction : public ceres::SizedCostFunction<1, 1> {
 public:
  virtual ~QuadraticCostFunction() {}
  virtual bool Evaluate(double const* const* parameters,
                        double* residuals,
                        double** jacobians) const {
    const double x = parameters[0][0];
    residuals[0] = 10 - x;

    // Compute the Jacobian if asked for.
    if (jacobians != NULL && jacobians[0] != NULL) {
      jacobians[0][0] = -1;
    }
    return true;
  }
};

QuadraticCostFunction::Evaluate is provided with an input array of parameters, an output array residuals for residuals, and an output array jacobians for Jacobians. The jacobians array is optional; Evaluate is expected to check whether it is non-null, and if so, fill it with the values of the derivative of the residual function. In this case, since the residual function is linear, the Jacobian is constant [4].
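Using this class is otherwise no different from the earlier examples. A minimal sketch of hooking it into a problem (reusing the Problem and the variable x from the Hello World example) would be:

// The analytic cost function is added just like the autodiff one;
// no AutoDiffCostFunction wrapper is needed.
CostFunction* cost_function = new QuadraticCostFunction;
problem.AddResidualBlock(cost_function, NULL, &x);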

As can be seen from the above code fragments, implementing CostFunction objects is a bit tedious. We recommend that unless you have a good reason to manage the jacobian computation yourself, you use AutoDiffCostFunction or NumericDiffCostFunction to construct your residual blocks.

More About Derivatives

Computing derivatives is by far the most complicated part of using Ceres, and depending on the circumstance the user may need more sophisticated ways of computing derivatives. This section just scratches the surface of how derivatives can be supplied to Ceres. Once you are comfortable with using NumericDiffCostFunction and AutoDiffCostFunction we recommend taking a look at DynamicAutoDiffCostFunction, CostFunctionToFunctor, NumericDiffFunctor and ConditionedCostFunction for more advanced ways of constructing and computing cost functions.

Footnotes

[3] examples/helloworld_numeric_diff.cc
[4] examples/helloworld_analytic_diff.cc

Powell’s Function

Consider now a slightly more complicated example – the minimization of Powell's function [5]. Let x = [x_1, x_2, x_3, x_4] and

f_1(x) = x_1 + 10 x_2
f_2(x) = \sqrt{5} (x_3 - x_4)
f_3(x) = (x_2 - 2 x_3)^2
f_4(x) = \sqrt{10} (x_1 - x_4)^2
F(x) = [f_1(x), f_2(x), f_3(x), f_4(x)]

F(x) is a function of four parameters, has four residuals, and we wish to find x such that 1/2 ‖F(x)‖^2 is minimized.

Again, the first step is to define functors that evaluate each of the terms in the objective function. Here is the code for evaluating f_4(x_1, x_4):

struct F4 {
  template <typename T>
  bool operator()(const T* const x1, const T* const x4, T* residual) const {
    residual[0] = T(sqrt(10.0)) * (x1[0] - x4[0]) * (x1[0] - x4[0]);
    return true;
  }
};
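Following the same pattern, here is a sketch of F1, which evaluates f_1(x_1, x_2) = x_1 + 10 x_2 (the version in examples/powell.cc may differ in minor details):

struct F1 {
  template <typename T>
  bool operator()(const T* const x1, const T* const x2, T* residual) const {
    // f1 = x1 + 10 * x2
    residual[0] = x1[0] + T(10.0) * x2[0];
    return true;
  }
};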

Similarly, we can define classes F2 and F3 to evaluate f_2(x_3, x_4) and f_3(x_2, x_3) respectively. Using these, the problem can be constructed as follows:

double x1 =  3.0; double x2 = -1.0; double x3 =  0.0; double x4 = 1.0;

Problem problem;

// Add residual terms to the problem using the autodiff
// wrapper to get the derivatives automatically.
problem.AddResidualBlock(
  new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), NULL, &x1, &x2);
problem.AddResidualBlock(
  new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), NULL, &x3, &x4);
problem.AddResidualBlock(
  new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), NULL, &x2, &x3);
problem.AddResidualBlock(
  new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), NULL, &x1, &x4);

Note that each ResidualBlock only depends on the two parameters that the corresponding residual object depends on and not on all four parameters. Compiling and running examples/powell.cc gives us:

Initial x1 = 3, x2 = -1, x3 = 0, x4 = 1
   0: f: 1.075000e+02 d: 0.00e+00 g: 1.55e+02 h: 0.00e+00 rho: 0.00e+00 mu: 1.00e+04 li:  0 it: 0.00e+00 tt: 0.00e+00
   1: f: 5.036190e+00 d: 1.02e+02 g: 2.00e+01 h: 2.16e+00 rho: 9.53e-01 mu: 3.00e+04 li:  1 it: 0.00e+00 tt: 0.00e+00
   2: f: 3.148168e-01 d: 4.72e+00 g: 2.50e+00 h: 6.23e-01 rho: 9.37e-01 mu: 9.00e+04 li:  1 it: 0.00e+00 tt: 0.00e+00
   3: f: 1.967760e-02 d: 2.95e-01 g: 3.13e-01 h: 3.08e-01 rho: 9.37e-01 mu: 2.70e+05 li:  1 it: 0.00e+00 tt: 0.00e+00
   4: f: 1.229900e-03 d: 1.84e-02 g: 3.91e-02 h: 1.54e-01 rho: 9.37e-01 mu: 8.10e+05 li:  1 it: 0.00e+00 tt: 0.00e+00
   5: f: 7.687123e-05 d: 1.15e-03 g: 4.89e-03 h: 7.69e-02 rho: 9.37e-01 mu: 2.43e+06 li:  1 it: 0.00e+00 tt: 0.00e+00
   6: f: 4.804625e-06 d: 7.21e-05 g: 6.11e-04 h: 3.85e-02 rho: 9.37e-01 mu: 7.29e+06 li:  1 it: 0.00e+00 tt: 0.00e+00
   7: f: 3.003028e-07 d: 4.50e-06 g: 7.64e-05 h: 1.92e-02 rho: 9.37e-01 mu: 2.19e+07 li:  1 it: 0.00e+00 tt: 0.00e+00
   8: f: 1.877006e-08 d: 2.82e-07 g: 9.54e-06 h: 9.62e-03 rho: 9.37e-01 mu: 6.56e+07 li:  1 it: 0.00e+00 tt: 0.00e+00
   9: f: 1.173223e-09 d: 1.76e-08 g: 1.19e-06 h: 4.81e-03 rho: 9.37e-01 mu: 1.97e+08 li:  1 it: 0.00e+00 tt: 0.00e+00
  10: f: 7.333425e-11 d: 1.10e-09 g: 1.49e-07 h: 2.40e-03 rho: 9.37e-01 mu: 5.90e+08 li:  1 it: 0.00e+00 tt: 0.00e+00
  11: f: 4.584044e-12 d: 6.88e-11 g: 1.86e-08 h: 1.20e-03 rho: 9.37e-01 mu: 1.77e+09 li:  1 it: 0.00e+00 tt: 0.00e+00
Ceres Solver Report: Iterations: 12, Initial cost: 1.075000e+02, Final cost: 4.584044e-12, Termination: GRADIENT_TOLERANCE.
Final x1 = 0.00116741, x2 = -0.000116741, x3 = 0.000190535, x4 = 0.000190535

It is easy to see that the optimal solution to this problem is at x_1 = 0, x_2 = 0, x_3 = 0, x_4 = 0 with an objective function value of 0. In 10 iterations, Ceres finds a solution with an objective function value of 4 × 10^{-12}.

Footnotes

[5] examples/powell.cc

Curve Fitting

The examples we have seen until now are simple optimization problems with no data. The original purpose of least squares and non-linear least squares analysis was fitting curves to data. It is only appropriate that we now consider an example of such a problem [6]. It contains data generated by sampling the curve y = e^{0.3x + 0.1} and adding Gaussian noise with standard deviation σ = 0.2. Let us fit some data to the curve

y = e^{mx + c}.
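The noisy observations used by the example ship with it. Purely as an illustration of what such data looks like, a sketch like the following could generate it (the sample count and the x range here are arbitrary choices, not taken from the example):

#include <cmath>
#include <random>
#include <vector>

// Generate noisy samples of y = exp(0.3 * x + 0.1) with sigma = 0.2,
// stored as x0, y0, x1, y1, ...
std::vector<double> MakeData(int n) {
  std::mt19937 rng(42);
  std::normal_distribution<double> noise(0.0, 0.2);
  std::vector<double> data;
  for (int i = 0; i < n; ++i) {
    const double x = 5.0 * i / (n - 1);  // x in [0, 5], an arbitrary range
    data.push_back(x);
    data.push_back(std::exp(0.3 * x + 0.1) + noise(rng));
  }
  return data;
}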

We begin by defining a templated object to evaluate the residual. There will be a residual for each observation.

struct ExponentialResidual {
  ExponentialResidual(double x, double y)
      : x_(x), y_(y) {}

  template <typename T>
  bool operator()(const T* const m, const T* const c, T* residual) const {
    residual[0] = T(y_) - exp(m[0] * T(x_) + c[0]);
    return true;
  }

 private:
  // Observations for a sample.
  const double x_;
  const double y_;
};

Assuming the observations are in a 2n sized array called data, the problem construction is a simple matter of creating a CostFunction for every observation.

double m = 0.0;
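// A sketch of how the construction presumably continues from here, creating
// one AutoDiffCostFunction per observation as described above. kNumObservations
// and the data array are assumed to be defined elsewhere in the example source.
double c = 0.0;

Problem problem;
for (int i = 0; i < kNumObservations; ++i) {
  CostFunction* cost_function =
      new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
          new ExponentialResidual(data[2 * i], data[2 * i + 1]));
  problem.AddResidualBlock(cost_function, NULL, &m, &c);
}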
