l0-Norm, l1-Norm, l2-Norm, … , l-infinity Norm


I’ve been working with norms a lot lately, and it is time to talk about them. In this post we are going to discuss a whole family of norms.

What is a norm?

Mathematically, a norm is a measure of the total size or length of a vector (or matrix) in a vector space. For simplicity, we can say that the higher the norm, the bigger the (values in the) vector or matrix. Norms come in many forms and under many names, including such popular ones as the Euclidean distance and the Mean-Squared Error.

Most of the time you will see a norm appear in an expression like this:

\left \| x \right \|, where x can be a vector or a matrix.

For example, the Euclidean norm of the vector a = \begin{bmatrix}  3  \\  -2  \\  1  \end{bmatrix} is \left \| a \right \|_2=\sqrt{3^2+(-2)^2+1^2} \approx 3.742, which is the size of the vector a.

The above example shows how to compute a Euclidean norm, formally called an l_2-norm. There are many other types of norm beyond the scope of this explanation; in fact, for every real number p there is a corresponding l_p quantity (notice the emphasised words real number: p is not limited to integers), although strictly speaking the formula defines a true norm only when p \geq 1.

Formally, the l_p-norm of x is defined as:

\left \| x \right \|_p = \sqrt[p]{\sum_{i}\left | x_i \right |^p}  where p \in \mathbb{R}

That’s it! The p-th root of the sum of the absolute values of all elements raised to the p-th power is what we call a norm.
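To make the definition concrete, here is a minimal sketch (assuming NumPy is available) that evaluates the formula directly for the example vector a = [3, -2, 1] and checks it against NumPy’s built-in norm:

```python
import numpy as np

a = np.array([3.0, -2.0, 1.0])

def lp_norm(x, p):
    """p-th root of the sum of |x_i|^p, exactly as in the definition."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

print(lp_norm(a, 2))         # 3.7416... (the Euclidean norm from the example)
print(np.linalg.norm(a, 2))  # same value via NumPy's built-in norm
```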

The interesting point is that even though all l_p-norms look very similar to each other, their mathematical properties are very different, and thus their applications differ dramatically too. We are now going to look at some of these norms in detail.

l0-norm 

The first norm we are going to discuss is the l_0-norm. By definition, the l_0-norm of x is

\left \| x \right \|_0 = \sqrt[0]{\sum_{i}x_i^0}

Strictly speaking, the l_0-norm is not actually a norm. It is a cardinality function whose definition mimics the form of the l_p-norm, though many people call it a norm. It is a bit tricky to work with because of the zeroth power and zeroth root that appear in it. Obviously, any non-zero x_i becomes one, but the ill-defined zeroth power, and especially the zeroth root, make this formula problematic. So in practice, most mathematicians and engineers use this definition of the l_0-norm instead:

\left \| x \right \|_0 = \#(i | x_i \neq 0)

that is, the total number of non-zero elements in the vector.
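In code this practical definition is just a count, as this small sketch shows (NumPy assumed, example vector made up):

```python
import numpy as np

x = np.array([0.0, 3.0, 0.0, -2.0, 0.0, 1.0])
l0 = np.count_nonzero(x)  # counts the entries with x_i != 0
print(l0)                 # 3
```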

Because it counts non-zero elements, there are many applications that use the l_0-norm. Lately it has come even more into focus because of the rise of the Compressive Sensing scheme, which tries to find the sparsest solution of an under-determined linear system. The sparsest solution means the solution with the fewest non-zero entries, i.e. the lowest l_0-norm. This problem is usually regarded as an optimisation problem over the l_0-norm, or l_0-optimisation.

l0-optimisation

Many applications, including Compressive Sensing, try to minimise the l_0-norm of a vector subject to some constraints, hence the name “l_0-minimisation”. The standard minimisation problem is formulated as:

\min \left \| x \right \|_0 subject to Ax = b

However, doing so is not an easy task. Because the l_0-norm lacks a convenient mathematical representation (it is non-convex and discontinuous), l_0-minimisation is regarded by computer scientists as an NP-hard problem; simply put, it is too complex and practically impossible to solve exactly.

In many cases, the l_0-minimisation problem is therefore relaxed to a higher-order norm problem such as l_1-minimisation or l_2-minimisation.

l1-norm

Following the definition of the norm, the l_1-norm of x is defined as

\left \| x \right \|_1 = \sum_{i} \left | x_i \right |

This norm is quite common in the norm family. It goes by many names and takes many forms in various fields; the Manhattan norm is its best-known nickname. If the l_1-norm is computed for the difference between two vectors or matrices, that is

SAD(x_1,x_2) = \left \| x_1-x_2 \right \|_1 = \sum_i \left | x_{1_i}-x_{2_i} \right |

it is called Sum of Absolute Difference (SAD) among computer vision scientists.

In the more general case of signal difference measurement, it may be normalised by the signal length:

MAE(x_1,x_2) = \frac{1}{n} \left \| x_1-x_2 \right \|_1 = \frac{1}{n} \sum_i \left | x_{1_i} - x_{2_i} \right |, where n is the size of x,

which is known as Mean-Absolute Error (MAE).
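Here is a minimal sketch of both l_1-based measures (NumPy assumed; the two signals are made-up illustrative values):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.5, 1.0, 3.0, 2.0])

sad = np.sum(np.abs(x1 - x2))  # Sum of Absolute Difference: ||x1 - x2||_1
mae = sad / x1.size            # Mean-Absolute Error: SAD scaled by n
print(sad, mae)                # 3.5 0.875
```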

l2-norm

The most popular of all norms is the l_2-norm. It is used in almost every field of engineering and science. Following the basic definition, the l_2-norm is defined as

\left \| x \right \|_2 = \sqrt{\sum_{i}x_i^2}

The l_2-norm is well known as the Euclidean norm, which is used as the standard quantity for measuring vector differences. As with the l_1-norm, if the Euclidean norm is computed for a vector difference, it is known as the Euclidean distance:

\left \| x_1-x_2 \right \|_2 = \sqrt{\sum_i (x_{1_i}-x_{2_i})^2}

or, in its squared form, known as the Sum of Squared Difference (SSD) among computer vision scientists:

SSD(x_1,x_2) = \left \| x_1-x_2 \right \|_2^2 = \sum_i (x_{1_i}-x_{2_i})^2

Its best-known application in the signal processing field is the Mean-Squared Error (MSE) measurement, which is used to compute the similarity, quality, or correlation between two signals. MSE is

MSE(x_1,x_2) = \frac{1}{n} \left \| x_1-x_2 \right \|_2^2 = \frac{1}{n} \sum_i (x_{1_i}-x_{2_i})^2
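The squared-l_2 measures are just as direct to compute; a minimal sketch (NumPy assumed, same made-up signals as before):

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.5, 1.0, 3.0, 2.0])

ssd = np.sum((x1 - x2) ** 2)  # Sum of Squared Difference: ||x1 - x2||_2^2
mse = ssd / x1.size           # Mean-Squared Error: SSD scaled by n
print(ssd, mse)               # 5.25 1.3125
```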

As previously discussed in the l_0-optimisation section, because of issues from both a computational and a mathematical point of view, many l_0-optimisation problems are relaxed into l_1- or l_2-optimisation instead. Because of this, we will now discuss the optimisation of l_2.

l2-optimisation

As in the l_0-optimisation case, the problem of minimising the l_2-norm is formulated as

\min \left \| x \right \|_2 subject to Ax = b

Assume that the constraint matrix A has full rank; this problem is then an underdetermined system with infinitely many solutions. The goal in this case is to pick out the best solution, i.e. the one with the lowest l_2-norm, from these infinitely many candidates. This would be very tedious to compute directly, but luckily there is a mathematical trick that helps a lot here.

Using the method of Lagrange multipliers, we define the Lagrangian

\mathfrak{L}(\boldsymbol{x}) = \left \| \boldsymbol{x} \right \|_2^2+\lambda^{T}(\boldsymbol{Ax}-\boldsymbol{b})

where \lambda is the vector of introduced Lagrange multipliers. Setting the derivative of this equation to zero to find an optimal solution gives

\hat{\boldsymbol{x}}_{opt} = -\frac{1}{2} \boldsymbol{A}^{T} \lambda

plug this solution into the constraint to get

\boldsymbol{A}\hat{\boldsymbol{x}}_{opt} = -\frac{1}{2}\boldsymbol{AA}^{T}\lambda=\boldsymbol{b}

\lambda=-2(\boldsymbol{AA}^{T})^{-1}\boldsymbol{b}

and finally

\hat{\boldsymbol{x}}_{opt}=\boldsymbol{A}^{T} (\boldsymbol{AA}^{T})^{-1} \boldsymbol{b}=\boldsymbol{A}^{+} \boldsymbol{b}

Using this equation, we can now instantly compute the optimal solution of the l_2-optimisation problem. The matrix A^{+} is well known as the Moore-Penrose pseudoinverse, and the problem itself is usually known as the least squares problem, least squares regression, or least squares optimisation.
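A minimal sketch of this closed-form solution, assuming NumPy is available (the matrix A and vector b are made-up illustrative values with A of full row rank):

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [2.0, 0.0, 1.0]])  # 2 equations, 3 unknowns (full row rank)
b = np.array([3.0, 2.0])

# x = A^T (A A^T)^{-1} b, computed without forming an explicit inverse
x_opt = A.T @ np.linalg.solve(A @ A.T, b)
print(x_opt)
print(np.allclose(A @ x_opt, b))                   # True: constraint satisfied
print(np.allclose(x_opt, np.linalg.pinv(A) @ b))   # matches the pseudoinverse
```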

However, even though the least squares solution is easy to compute, it is not necessarily the best solution for every purpose. Because of the smooth nature of the l_2-norm itself, its minimiser tends to spread energy over many entries, so it is rarely the sparse solution we are after.

On the contrary, l_1-optimisation can provide a much better result in this respect.

l1-optimisation

As usual, the l_1-minimisation problem is formulated as

\min \left \| x \right \|_1 subject to Ax = b

Because the l_1-norm is not smooth, unlike the l_2-norm, the solution of this problem tends to be sparse, which makes it a much better surrogate for l_0-minimisation than l_2-optimisation.

However, even though the l_1-minimisation problem has almost the same form as l_2-minimisation, it is much harder to solve. Because the objective is not a smooth function, the Lagrange-multiplier trick we used for the l_2 problem no longer yields a closed-form answer. Instead, the solution has to be searched for numerically, by an iterative optimisation procedure rather than a single formula.

Since there was no easy closed-form solution, the usefulness of l_1-optimisation was very limited for decades. Recently, however, the advancement of computers with high computational power, together with helpful algorithms, namely convex optimisation methods such as linear programming and non-linear programming, has made it practical to find the best solution to this problem. Many applications that rely on l_1-optimisation, including Compressive Sensing, are now possible. A sketch of the linear-programming formulation is given below.
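One standard way to cast l_1-minimisation as a linear program is to split x into non-negative parts, x = u - v with u, v >= 0, so that minimising sum(u) + sum(v) subject to A(u - v) = b recovers min ||x||_1 subject to Ax = b. A hedged sketch using SciPy’s linprog (SciPy assumed available; A and b are made-up illustrative values, not taken from any particular toolbox):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, 1.0],
              [2.0, 0.0, 1.0]])
b = np.array([3.0, 2.0])
m, n = A.shape

c = np.ones(2 * n)         # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])  # equality constraint: A u - A v = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))

x_l1 = res.x[:n] - res.x[n:]  # recover x = u - v
print(x_l1)                   # tends to be sparser than the l_2 answer above
```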

There are many toolboxes for l_1-optimisation available nowadays. These toolboxes usually use different approaches and/or algorithms to solve the same problem. Examples of these toolboxes are l1-magic, SparseLab, and ISAL1, among others.

We have now discussed several members of the norm family: the l_0-norm, l_1-norm, and l_2-norm. It’s time to move on to the next one. As we discussed at the very beginning, there can be an l_p-norm for any p following the same basic definition, so it would take a long time to talk about all of them. Fortunately, apart from the l_0-, l_1-, and l_2-norms, the rest of them are usually uncommon and therefore not as interesting to look at. So we are going to jump straight to the extreme case of the norm family, the l_{\infty}-norm (l-infinity norm).

l-infinity norm

As always, the definition of the l_{\infty}-norm is

\left \| x \right \|_{\infty} = \sqrt[\infty]{\sum_i \left | x_i \right |^{\infty}}

Now this definition looks tricky again, but actually it is quite straightforward. Consider the vector \boldsymbol{x}, and let x_j be the entry of \boldsymbol{x} with the largest magnitude. By the property of infinity itself, we can say that

\left | x_j \right |^{\infty} \gg \left | x_i \right |^{\infty} \quad \forall\, i \neq j

 then

\sum_i \left | x_i \right |^{\infty} = \left | x_j \right |^{\infty}

then

\left \| x \right \|_{\infty} = \sqrt[\infty]{\sum_i \left | x_i \right |^{\infty}} = \sqrt[\infty]{\left | x_j \right |^{\infty}} = \left | x_j \right |

Now we can simply say that the l_{\infty}-norm is

\left \| x \right \|_{\infty} = \max_i \left ( \left | x_i \right | \right )

that is, the maximum magnitude among the entries of the vector. That surely demystifies the meaning of the l_{\infty}-norm.
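In code this collapses to a single max (a minimal sketch, NumPy assumed, reusing the example vector from the beginning):

```python
import numpy as np

x = np.array([3.0, -2.0, 1.0])
print(np.max(np.abs(x)))          # 3.0: the largest entry magnitude
print(np.linalg.norm(x, np.inf))  # same value via NumPy's built-in norm
```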

Now that we have discussed the whole family of norms, from l_0 to l_{\infty}, I hope this discussion helps in understanding the meaning of a norm, its mathematical properties, and its real-world implications.

Reference and further reading:

Mathematical Norm – Wikipedia

Mathematical Norm – MathWorld

Michael Elad – “Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing”, Springer, 2010.

Linear Programming – MathWorld

Compressive Sensing – Rice University
