Nonlinear Optimization (Part 2): The Gauss–Newton Method and Levenberg–Marquardt Iteration


Both the Gauss–Newton method and the Levenberg–Marquardt iteration are used to solve nonlinear least squares problems.


From Wikipedia

The Gauss–Newton algorithm is a method used to solve non-linear least squares problems. It is a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.


Description

Given m functions r = (r1, …, rm) of n variables β = (β1, …, βn), with m ≥ n, the Gauss–Newton algorithm iteratively finds the minimum of the sum of squares

 S(\boldsymbol \beta)= \sum_{i=1}^m r_i(\boldsymbol \beta)^2.

Starting with an initial guess \boldsymbol \beta^{(0)} for the minimum, the method proceeds by the iterations

 \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r} \right)^{-1} \mathbf{ J_r} ^\mathsf{T} \mathbf{r}(\boldsymbol \beta^{(s)})

where, if r and β are column vectors, the entries of the Jacobian matrix are

 (\mathbf{J_r})_{ij} = \frac{\partial r_i (\boldsymbol \beta^{(s)})}{\partial \beta_j},

and the symbol ^\mathsf{T} denotes the matrix transpose.
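As a concrete illustration of the update above, here is a minimal Python/NumPy sketch of the plain Gauss–Newton iteration. The residual functions, their Jacobian, and the starting point are invented for the example; the normal equations are solved directly rather than forming the explicit inverse.

    import numpy as np

    def gauss_newton(r, jac, beta0, n_iter=20):
        """Plain Gauss-Newton iteration: beta <- beta - (J^T J)^{-1} J^T r."""
        beta = np.asarray(beta0, dtype=float)
        for _ in range(n_iter):
            J = jac(beta)            # m x n Jacobian of the residuals
            res = r(beta)            # m-vector of residuals
            # Solve the normal equations (J^T J) delta = -J^T res
            # instead of forming the matrix inverse explicitly.
            delta = np.linalg.solve(J.T @ J, -J.T @ res)
            beta = beta + delta
        return beta

    # Toy residuals (made up for illustration): r1 = b1*b2 - 2, r2 = b1 - 1, r3 = b2 - 2.
    r = lambda b: np.array([b[0] * b[1] - 2.0, b[0] - 1.0, b[1] - 2.0])
    jac = lambda b: np.array([[b[1], b[0]], [1.0, 0.0], [0.0, 1.0]])
    print(gauss_newton(r, jac, [3.0, 3.0]))   # converges to approximately [1, 2]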

If m = n, the iteration simplifies to

 \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left( \mathbf{J_r} \right)^{-1} \mathbf{r}(\boldsymbol \beta^{(s)})

which is a direct generalization of Newton's method in one dimension.

In data fitting, where the goal is to find the parameters β such that a given model function y = f(x, β) best fits some data points (x_i, y_i), the functions r_i are the residuals

r_i(\boldsymbol \beta)= y_i - f(x_i, \boldsymbol \beta).

Then, the Gauss-Newton method can be expressed in terms of the Jacobian Jf of the function f as

 \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} + \left(\mathbf{J_f}^\mathsf{T} \mathbf{J_f} \right)^{-1} \mathbf{ J_f} ^\mathsf{T}\mathbf{r}(\boldsymbol \beta^{(s)}).
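For the data-fitting form of the iteration, the following sketch fits a hypothetical exponential model f(x, β) = β1·exp(β2·x) to synthetic data; the model, the noise level, and the starting guess are all assumptions made for the example.

    import numpy as np

    # Hypothetical model for illustration: f(x, beta) = beta[0] * exp(beta[1] * x).
    def f(x, beta):
        return beta[0] * np.exp(beta[1] * x)

    def jac_f(x, beta):
        """m x 2 Jacobian of the model values f(x_i, beta) with respect to beta."""
        e = np.exp(beta[1] * x)
        return np.column_stack([e, beta[0] * x * e])

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 30)
    y = f(x, [2.0, -1.5]) + 0.01 * rng.standard_normal(x.size)   # synthetic data

    beta = np.array([1.0, -1.0])        # initial guess
    for _ in range(10):
        Jf = jac_f(x, beta)
        res = y - f(x, beta)            # residuals r_i = y_i - f(x_i, beta)
        # beta <- beta + (Jf^T Jf)^{-1} Jf^T r
        beta = beta + np.linalg.solve(Jf.T @ Jf, Jf.T @ res)
    print(beta)                         # close to the true parameters [2.0, -1.5]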

Notes

The assumption m ≥ n in the algorithm statement is necessary, as otherwise the matrix J_r^T J_r is not invertible (since rank(J_r^T J_r) = rank(J_r)) and the normal equations (for Δ in the "Derivation from Newton's method" section) cannot be solved (at least not uniquely).

The Gauss–Newton algorithm can be derived by linearly approximating the vector of functions ri. Using Taylor's theorem, we can write at every iteration:

\mathbf{r}(\boldsymbol \beta)\approx \mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta

with \Delta=\boldsymbol \beta-\boldsymbol \beta^s. The task of finding Δ minimizing the sum of squares of the right-hand side, i.e.,

\mathbf{min}\|\mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta\|_2^2,

is a linear least squares problem, which can be solved explicitly, yielding the normal equations in the algorithm.

The normal equations are n simultaneous linear equations in the unknown increments Δ. They may be solved in one step, using Cholesky decomposition, or, better, the QR factorization of J_r. For large systems, an iterative method, such as the conjugate gradient method, may be more efficient. If there is a linear dependence between columns of J_r, the iterations will fail as J_r^T J_r becomes singular.
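The sketch below illustrates two of the solution strategies mentioned above for the increment Δ, assuming the Jacobian J and the residual vector r are already available as NumPy arrays (both are made up here). The QR route avoids forming J^T J explicitly and is the better-conditioned choice.

    import numpy as np

    def gn_step_cholesky(J, r):
        """Solve the normal equations (J^T J) delta = -J^T r via Cholesky."""
        A = J.T @ J
        b = -J.T @ r
        L = np.linalg.cholesky(A)       # A = L L^T, L lower triangular
        y = np.linalg.solve(L, b)       # forward substitution
        return np.linalg.solve(L.T, y)  # backward substitution

    def gn_step_qr(J, r):
        """Solve min ||r + J delta||_2 via the thin QR factorization of J."""
        Q, R = np.linalg.qr(J)          # J = Q R with R upper triangular
        return np.linalg.solve(R, -Q.T @ r)

    J = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 7.0]])   # made-up full-rank Jacobian
    r = np.array([1.0, -1.0, 0.5])
    print(gn_step_cholesky(J, r))
    print(gn_step_qr(J, r))             # both yield the same increment delta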


Derivation from Newton's method

In what follows, the Gauss–Newton algorithm will be derived from Newton's method for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear.

The recurrence relation for Newton's method for minimizing a function S of parameters, \boldsymbol\beta, is

 \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \mathbf H^{-1} \mathbf g \,

where g denotes the gradient vector of S and H denotes the Hessian matrix of S. Since  S = \sum_{i=1}^m r_i^2, the gradient is given by

g_j=2\sum_{i=1}^m r_i\frac{\partial r_i}{\partial \beta_j}.

Elements of the Hessian are calculated by differentiating the gradient elements, g_j, with respect to \beta_k

H_{jk}=2\sum_{i=1}^m \left(\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}+r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} \right).

The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by

H_{jk}\approx 2\sum_{i=1}^m J_{ij}J_{ik}

where J_{ij}=\frac{\partial r_i}{\partial \beta_j} are entries of the Jacobian Jr. The gradient and the approximate Hessian can be written in matrix notation as

\mathbf g=2\mathbf{J}_\mathbf{r}^\mathsf{T}\mathbf{r}, \quad \mathbf{H} \approx 2 \mathbf{J}_\mathbf{r}^\mathsf{T}\mathbf{J_r}.\,

These expressions are substituted into the recurrence relation above to obtain the operational equations

 \boldsymbol{\beta}^{(s+1)} = \boldsymbol\beta^{(s)}+\Delta;\quad \Delta = -\left( \mathbf{J_r}^\mathsf{T}\mathbf{J_r} \right)^{-1} \mathbf{J_r}^\mathsf{T}\mathbf{r}.
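The following small check, reusing the toy residuals from the first sketch (again an invented example, not part of the text), verifies numerically that the Newton-style step with g = 2 J^T r and H ≈ 2 J^T J reproduces the Gauss–Newton increment exactly, the factors of 2 cancelling.

    import numpy as np

    # Same toy residuals as before: r1 = b1*b2 - 2, r2 = b1 - 1, r3 = b2 - 2.
    r = lambda b: np.array([b[0] * b[1] - 2.0, b[0] - 1.0, b[1] - 2.0])
    jac = lambda b: np.array([[b[1], b[0]], [1.0, 0.0], [0.0, 1.0]])

    beta = np.array([3.0, 3.0])
    J, res = jac(beta), r(beta)

    g = 2.0 * J.T @ res                 # exact gradient of S = sum r_i^2
    H_approx = 2.0 * J.T @ J            # Gauss-Newton approximation of the Hessian

    delta_newton = -np.linalg.solve(H_approx, g)       # Newton-like step
    delta_gn = -np.linalg.solve(J.T @ J, J.T @ res)    # Gauss-Newton step
    print(delta_newton, delta_gn)       # identical up to rounding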

Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation

\left|r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}\right| \ll \left|\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}\right|

that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected.

  1. The function values r_i are small in magnitude, at least around the minimum.
  2. The functions are only "mildly" non-linear, so that \frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} is relatively small in magnitude.


Improved version

With the Gauss–Newton method the sum of squares S may not decrease at every iteration. However, since Δ is a descent direction, unless \boldsymbol \beta^s is a stationary point of S, it holds that S(\boldsymbol \beta^s+\alpha\Delta) < S(\boldsymbol \beta^s) for all sufficiently small \alpha>0. Thus, if divergence occurs, one solution is to employ a fraction, \alpha, of the increment vector Δ in the updating formula

 \boldsymbol \beta^{s+1} = \boldsymbol \beta^s+\alpha\  \Delta.

In other words, the increment vector is too long, but it points "downhill", so going just part of the way will decrease the objective function S. An optimal value for \alpha can be found by using a line search algorithm, that is, the magnitude of \alpha is determined by finding the value that minimizes S, usually using a direct search method in the interval 0<\alpha<1.
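A minimal sketch of this damped update is shown below, with the line search replaced by simple backtracking (halving α until S decreases); the Rosenbrock-style residuals and all constants are chosen only for illustration.

    import numpy as np

    def damped_gn_step(r, jac, beta):
        """One Gauss-Newton step with a backtracking choice of the fraction alpha."""
        J, res = jac(beta), r(beta)
        delta = np.linalg.solve(J.T @ J, -J.T @ res)
        S_old = res @ res
        alpha = 1.0
        while alpha > 1e-8:             # halve alpha until S actually decreases
            trial = beta + alpha * delta
            if r(trial) @ r(trial) < S_old:
                return trial
            alpha *= 0.5
        return beta                     # no acceptable step found

    # Rosenbrock residuals (an illustrative choice): S = 100*(b2 - b1^2)^2 + (1 - b1)^2.
    r = lambda b: np.array([10.0 * (b[1] - b[0] ** 2), 1.0 - b[0]])
    jac = lambda b: np.array([[-20.0 * b[0], 10.0], [-1.0, 0.0]])
    beta = np.array([-1.2, 1.0])
    for _ in range(50):
        beta = damped_gn_step(r, jac, beta)
    print(beta)                         # approaches the minimizer [1, 1]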

In cases where the direction of the shift vector is such that the optimal fraction,  \alpha , is close to zero, an alternative method for handling divergence is the use of the Levenberg–Marquardt algorithm, also known as the "trust region method".[1] The normal equations are modified in such a way that the increment vector is rotated towards the direction of steepest descent,

\left(\mathbf{J^TJ+\lambda D}\right)\Delta=-\mathbf{J}^T \mathbf{r},

where D is a positive diagonal matrix. Note that when D is the identity matrix and \lambda\to+\infty, then \lambda\Delta\to -\mathbf{J}^T \mathbf{r}; therefore the direction of Δ approaches the direction of the negative gradient -\mathbf{J}^T \mathbf{r}.
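A quick numerical check of this rotation, with D = I and a made-up Jacobian and residual vector, is sketched below: as λ grows, the direction of the damped step lines up with the negative gradient -J^T r.

    import numpy as np

    J = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]])   # made-up Jacobian
    r = np.array([1.0, -0.5, 2.0])                       # made-up residuals
    neg_grad = -J.T @ r                                  # steepest-descent direction

    unit = lambda v: v / np.linalg.norm(v)

    for lam in [0.0, 1.0, 100.0, 1e6]:
        # Damped step with D = I: (J^T J + lam I) delta = -J^T r
        delta = np.linalg.solve(J.T @ J + lam * np.eye(2), neg_grad)
        print(lam, unit(delta))
    print(unit(neg_grad))   # the step direction converges to this as lam grows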

The so-called Marquardt parameter, \lambda, may also be optimized by a line search, but this is inefficient, as the shift vector must be re-calculated every time \lambda is changed. A more efficient strategy is this: when divergence occurs, increase the Marquardt parameter until there is a decrease in S. Then retain the value from one iteration to the next, but decrease it if possible, until a cut-off value is reached below which the Marquardt parameter can be set to zero; the minimization of S then becomes a standard Gauss–Newton minimization.

The (non-negative) damping factor, \lambda, is adjusted at each iteration. If the reduction of S is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss–Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, \lambda can be increased, giving a step closer to the gradient-descent direction. Note that the gradient of S with respect to β equals -2(\mathbf{J}^{T} [\mathbf{y} - \mathbf{f}(\boldsymbol \beta) ] )^T. Therefore, for large values of \lambda, the step will be taken approximately in the direction of the gradient. If either the length of the calculated step, δ, or the reduction of the sum of squares from the latest parameter vector, β + δ, falls below predefined limits, the iteration stops and the last parameter vector, β, is considered to be the solution.

Levenberg's algorithm has the disadvantage that if the value of the damping factor, \lambda, is large, the curvature information in J^T J is effectively not used when inverting J^T J + \lambda I. Marquardt provided the insight that we can scale each component of the gradient according to the curvature, so that there is larger movement along the directions where the gradient is smaller. This avoids slow convergence in the direction of small gradient. Therefore, Marquardt replaced the identity matrix, I, with the diagonal matrix consisting of the diagonal elements of J^T J, resulting in the Levenberg–Marquardt algorithm:

\mathbf{(J^T J + \lambda\, diag(J^T J))\boldsymbol \delta  = J^T [y - f(\boldsymbol \beta)]}\!.
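Putting the pieces together, here is a hedged sketch of a Levenberg–Marquardt loop with Marquardt's diag(J^T J) scaling and the λ adjustment strategy described above; the update factors (10 and 0.5), the cut-off, and the test problem (the same hypothetical exponential model as earlier) are all assumptions of the example, not prescribed by the text.

    import numpy as np

    def levenberg_marquardt(f, jac, x, y, beta0, lam=1e-2, n_iter=50):
        """Sketch of Levenberg-Marquardt with Marquardt's diag(J^T J) scaling."""
        beta = np.asarray(beta0, dtype=float)
        for _ in range(n_iter):
            J = jac(x, beta)
            res = y - f(x, beta)
            S = res @ res
            A = J.T @ J
            g = J.T @ res
            while True:
                # (J^T J + lam * diag(J^T J)) delta = J^T (y - f(beta))
                delta = np.linalg.solve(A + lam * np.diag(np.diag(A)), g)
                trial = beta + delta
                r_trial = y - f(x, trial)
                if r_trial @ r_trial < S:
                    beta, lam = trial, lam * 0.5   # accept step, relax damping
                    break
                lam *= 10.0                        # reject step, damp more strongly
                if lam > 1e12:
                    return beta                    # give up: no improving step found
        return beta

    # Same hypothetical exponential model as in the data-fitting sketch above.
    f = lambda x, b: b[0] * np.exp(b[1] * x)
    jac = lambda x, b: np.column_stack([np.exp(b[1] * x), b[0] * x * np.exp(b[1] * x)])
    x = np.linspace(0.0, 1.0, 30)
    y = f(x, [2.0, -1.5])
    print(levenberg_marquardt(f, jac, x, y, [1.0, -1.0]))   # roughly [2.0, -1.5]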

Related algorithms

In a quasi-Newton method, such as that due to Davidon, Fletcher and Powell or Broyden–Fletcher–Goldfarb–Shanno (the BFGS method), an estimate of the full Hessian, \frac{\partial^2 S}{\partial \beta_j \partial\beta_k}, is built up numerically using first derivatives \frac{\partial r_i}{\partial\beta_j} only, so that after n refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to nonlinear least-squares problems.

Another method for solving minimization problems using only first derivatives is gradient descent. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.
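For contrast, a minimal gradient-descent loop on the same sum of squares (reusing the toy residuals from the first sketch, with an assumed fixed step size) uses only the gradient 2 J^T r and typically needs far more iterations than Gauss–Newton to reach comparable accuracy.

    import numpy as np

    # Gradient descent on S(beta) = sum r_i^2, using only first derivatives.
    r = lambda b: np.array([b[0] * b[1] - 2.0, b[0] - 1.0, b[1] - 2.0])
    jac = lambda b: np.array([[b[1], b[0]], [1.0, 0.0], [0.0, 1.0]])

    beta = np.array([3.0, 3.0])
    step = 0.05                          # fixed step size (an assumption)
    for _ in range(2000):
        grad = 2.0 * jac(beta).T @ r(beta)
        beta = beta - step * grad
    print(beta)                          # slowly approaches [1, 2]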

