A Short introduction to descriptors, with the basics of the SIFT descriptor


Reposted from: https://gilscvblog.com/2013/08/18/a-short-introduction-to-descriptors/

Gil's CV blog

A Short introduction to descriptors

Since the next few posts will talk about binary descriptors, I thought it would be a good idea to post a short introduction to the subject of patch descriptors. The following post will talk about the motivation for patch descriptors, their common usage, and highlight the Histogram of Oriented Gradients (HOG) based descriptors.

I think the best way to start is to consider one application of patch descriptors and to explain the common pipeline in their usage. Consider, for example, the application of image alignment: we would like to align two images of the same scene taken from slightly different viewpoints. One way of doing so is by applying the following steps (a minimal code sketch of this pipeline appears after the list):

  1. Compute distinctive keypoints in both images (for example, corners).

  2. Compare the keypoints between the two images to find matches.

  3. Use the matches to find a general mapping between the images (for example, a homography).

  4. Apply the mapping on the first image to align it to the second image.
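
Below is a minimal sketch of these four steps using OpenCV in Python. It is my own illustration, not part of the original post; the image file names and the choice of ORB keypoints/descriptors are placeholders.

    # A minimal sketch of the four alignment steps above, using OpenCV.
    import cv2
    import numpy as np

    img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

    # 1. Detect keypoints (ORB also computes descriptors, used in step 2).
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # 2. Compare keypoints between the two images to find matches
    #    (ORB descriptors are binary, so Hamming distance is used).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # 3. Use the matches to estimate a general mapping (here, a homography).
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 4. Apply the mapping to the first image to align it with the second.
    aligned = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))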

Using descriptors to compare patches

Let’s focus on the second step. Given a small patch around a keypoint taken from the first image, and a second small patch around a keypoint taken from the second image, how can we determine whether they are indeed the same point?

In general, the problem we are focusing on is that of comparing two image patches and measuring their similarity. Given two such patches, how can we determine their similarity? We can measure the pixel-to-pixel similarity by computing their Euclidean distance, but that measure is very sensitive to noise, rotation, translation and illumination changes. In most applications we would like to be robust to such changes. For example, in the image alignment application, we would like to be robust to small viewpoint changes – that means robustness to rotation and to translation.
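
As a toy illustration of this sensitivity (my own example, not from the original post), note how a simple global brightness change or a bit of noise already produces a large pixel-to-pixel Euclidean distance, even though the patch content is unchanged:

    # Raw pixel-wise Euclidean distance is fragile: a uniform brightness
    # change or mild noise leaves the content intact but yields a large
    # distance. Values here are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    patch = rng.integers(0, 200, size=(16, 16)).astype(np.float64)

    brighter = patch + 40.0                          # global illumination shift
    noisy = patch + rng.normal(0, 5, patch.shape)    # mild sensor noise

    print(np.linalg.norm(patch - brighter))  # 640.0 = sqrt(256 pixels * 40^2)
    print(np.linalg.norm(patch - noisy))     # clearly non-zero for the "same" patch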

This is where patch descriptors come in handy. A descriptor is a function that is applied to the patch to describe it in a way that is invariant to all the image changes that are relevant to our application (e.g. rotation, illumination, noise etc.). A descriptor comes “built in” with a distance function to determine the similarity, or distance, between two computed descriptors. So to compare two image patches, we compute their descriptors and measure their similarity by computing the distance between the descriptors.

 The common pipeline for using patch descriptors is:

  1. Detect keypoints in image (distinctive points such as corners).

  2. Describe each region around a keypoint as a feature vector, using a descriptor.

  3. Use the descriptors in the application (to compare descriptors, use the descriptor distance or similarity function); a short code sketch follows the diagram below.

The following diagram illustrates this process:

Extracting descriptors from detected keypoints
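
As a concrete (assumed) example of this pipeline with OpenCV's SIFT implementation (exposed as cv2.SIFT_create() in recent OpenCV releases), detecting keypoints, computing 128-dimensional descriptors and comparing them via their Euclidean distance might look like this:

    # Detect -> describe -> compare, using SIFT descriptors and their
    # natural (Euclidean) distance. File names are placeholders.
    import cv2

    img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-D descriptors
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Compare descriptors by Euclidean distance, keeping matches that pass
    # Lowe's ratio test (best match clearly closer than the second best).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    print(len(good), "confident matches")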

HOG descriptors

So, now that we understand how descriptors are used, let’s give an example of one family of descriptors. We will consider the family of Histogram of Oriented Gradients (HOG) based descriptors. Notable examples of this family are SIFT [1], SURF [2] and GLOH [3]. Of the members of this family, we will describe its most famous member – the SIFT descriptor.

SIFT was presented in 1999 by David Lowe and includes both a keypoint detector and descriptor. SIFT is computed as follows:

  1. First, detect keypoints using the SIFT detector, which also detects scale and orientation of the keypoint.

  2. Next, for a given keypoint, warp the region around it to the canonical orientation and scale, and resize the region to 16×16 pixels.

SIFT – warping the region around the keypoint

  3. Compute the gradient (orientation and magnitude) at each pixel.

  4. Divide the pixels into 16 squares of 4×4 pixels each.

SIFT – dividing into squares and calculating orientation

  5. For each square, compute a histogram of gradient orientations over 8 directions.

SIFT – calculating histograms of gradient orientation

  6. Concatenate the histograms to obtain a 128-dimensional (16×8) feature vector (see the code sketch after the illustrations below):

SIFT – concatenating histograms from different squares

SIFT descriptor illustration:

SIFT descriptors illustration
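
A much simplified NumPy sketch of steps 3–6 (my own illustration: it omits the Gaussian weighting, trilinear interpolation and orientation assignment that the real SIFT descriptor uses) could look as follows:

    # Build a SIFT-like 128-D descriptor from a 16x16 patch that is assumed
    # to be already warped to canonical scale and orientation (steps 1-2).
    import numpy as np

    def sift_like_descriptor(patch):
        # Step 3: gradient magnitude and orientation at each pixel.
        gy, gx = np.gradient(patch.astype(np.float64))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)

        # Steps 4-5: 16 cells of 4x4 pixels, each described by an 8-bin,
        # magnitude-weighted histogram of gradient orientations.
        hists = []
        for cy in range(4):
            for cx in range(4):
                m = mag[4 * cy:4 * cy + 4, 4 * cx:4 * cx + 4].ravel()
                a = ang[4 * cy:4 * cy + 4, 4 * cx:4 * cx + 4].ravel()
                bins = (a / (2 * np.pi) * 8).astype(int) % 8
                hists.append(np.bincount(bins, weights=m, minlength=8))

        # Step 6: concatenate into a 16 * 8 = 128-dimensional vector and
        # normalize it (this helps with multiplicative illumination changes).
        desc = np.concatenate(hists)
        return desc / (np.linalg.norm(desc) + 1e-12)

    patch = np.random.default_rng(0).random((16, 16))
    print(sift_like_descriptor(patch).shape)   # (128,)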

SIFT is robust to illumination changes, as gradients are invariant to additive shifts in light intensity (and normalizing the final vector compensates for multiplicative changes). It is also somewhat robust to small rotations and translations of the patch content, since the per-cell histograms discard the exact pixel positions; larger rotations are handled by warping the region to its canonical orientation in step 2.
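
A quick sanity check of the first claim (again my own toy example): adding a constant to every pixel does not change the gradients, so a descriptor built from them is unaffected:

    # Gradients ignore additive intensity shifts: adding a constant to every
    # pixel leaves the gradient field, and hence the descriptor, unchanged.
    import numpy as np

    rng = np.random.default_rng(1)
    patch = rng.random((16, 16))
    shifted = patch + 0.3                      # uniform brightness change

    print(np.allclose(np.gradient(patch), np.gradient(shifted)))   # True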

Other members of this family, for example SURF and GLOH, are also based on histograms of gradient orientations. SIFT and SURF are patented, so they can’t be freely used in applications.

So, that’s it for now :) In the next few posts we will talk about binary descriptors, which provide an alternative as they are lightweight, fast and not patented.

Gil.

References:

[1] Lowe, David G. “Object recognition from local scale-invariant features.” Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, IEEE, 1999.

[2] Bay, Herbert, Tinne Tuytelaars, and Luc Van Gool. “SURF: Speeded Up Robust Features.” Computer Vision – ECCV 2006, Springer Berlin Heidelberg, 2006, pp. 404–417.

[3] Mikolajczyk, Krystian, and Cordelia Schmid. “A performance evaluation of local descriptors.” IEEE Transactions on Pattern Analysis and Machine Intelligence 27.10 (2005): 1615–1630.

