Connected Components Labeling


http://homepages.inf.ed.ac.uk/rbf/HIPR2/label.htm




Common Names: Connected Components Labeling

Brief Description

Connected components labeling scans an image and groups its pixels into components based on pixel connectivity, i.e. all pixels in a connected component share similar pixel intensity values and are in some way connected with each other. Once all groups have been determined, each pixel is labeled with a graylevel or a color (color labeling) according to the component it was assigned to.

The extraction and labeling of disjoint connected components in an image is central to many automated image analysis applications.

How It Works

Connected component labeling works by scanning an image, pixel-by-pixel (from top to bottom and left to right) in order to identify connected pixel regions, i.e. regions of adjacent pixels which share the same set of intensity values V. (For a binary image V={1}; however, in a graylevel image V will take on a range of values, for example: V={51, 52, 53, ..., 77, 78, 79, 80}.)

Connected component labeling works on binary or graylevel images, and different measures of connectivity are possible. However, for the following we assume binary input images and 8-connectivity. The connected components labeling operator scans the image by moving along a row until it comes to a point p (where p denotes the pixel to be labeled at any stage in the scanning process) for which V={1}. When this is true, it examines the four neighbors of p which have already been encountered in the scan (i.e. the neighbors (i) to the left of p, (ii) above it, and (iii and iv) the two upper diagonal terms). Based on this information, the labeling of p occurs as follows:

  • If all four neighbors are 0, assign a new label to p, else

  • if only one neighbor has V={1}, assign its label to p, else

  • if more than one of the neighbors have V={1}, assign one of the labels to p and make a note of the equivalences.

After completing the scan, the equivalent label pairs are sorted into equivalence classes and a unique label is assigned to each class. As a final step, a second scan is made through the image, during which each label is replaced by the label assigned to its equivalence class. For display, the labels might be different graylevels or colors.
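The two-pass procedure described above can be sketched in Python. This is a minimal illustration, not the implementation used on this worksheet; the function name and the union-find bookkeeping for equivalences are our own choices:

```python
import numpy as np

def label_components(img):
    """Two-pass connected components labeling with 8-connectivity (sketch)."""
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    parent = {}  # union-find forest recording which labels are equivalent

    def find(x):
        """Representative label of x's equivalence class."""
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    next_label = 1
    # First pass: provisional labels from the already-scanned neighbours
    # (left, upper-left, upper and upper-right).
    for i in range(h):
        for j in range(w):
            if img[i, j] != 1:
                continue
            nbrs = []
            for di, dj in ((0, -1), (-1, -1), (-1, 0), (-1, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] > 0:
                    nbrs.append(labels[ni, nj])
            if not nbrs:                      # all four neighbours are 0: new label
                labels[i, j] = next_label
                parent[next_label] = next_label
                next_label += 1
            else:                             # take one label, note equivalences
                labels[i, j] = min(nbrs)
                for n in nbrs:
                    ra, rb = find(min(nbrs)), find(n)
                    if ra != rb:
                        parent[max(ra, rb)] = min(ra, rb)
    # Second pass: replace each label by its equivalence-class representative.
    for i in range(h):
        for j in range(w):
            if labels[i, j] > 0:
                labels[i, j] = find(labels[i, j])
    return labels
```

A U-shaped object exercises the equivalence step: its two arms first receive different provisional labels, which the second pass merges into one.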

Guidelines for Use

To illustrate connected components labeling, we start with a simple binary image containing some distinct artificial objects

art8

After scanning this image and labeling each distinct pixel class with a different grayvalue, we obtain the labeled output image

art8lab1

Note that this image was scaled, since the initial grayvalues (1 - 8) would all appear black on the screen. However, the pixels initially assigned to the lower classes (1 and 2) are still indiscernible from the background. If we assign a distinct color to each graylevel we obtain

art8lab2

The full utility of connected components labeling can be realized in an image analysis scenario wherein images are pre-processed via some segmentation (e.g. thresholding) or classification scheme. One application is to use connected components labeling to count the objects in an image. For example, in the above simple scene the 8 objects yield 8 different classes.
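In practice, the counting step is usually delegated to a library routine. Assuming SciPy is available, `scipy.ndimage.label` returns both the labeled image and the component count; a sketch on a toy binary array:

```python
import numpy as np
from scipy import ndimage  # assumes SciPy is installed

binary = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 1]])

# structure=np.ones((3, 3)) selects 8-connectivity;
# the routine's default structuring element gives 4-connectivity.
labeled, num_objects = ndimage.label(binary, structure=np.ones((3, 3)))
print(num_objects)  # 2
```

Here the two pixel blocks yield two components, just as the 8 objects in the scene above yield 8 classes.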

If we want to count the objects in a real world scene like

clc3

we first have to threshold the image in order to produce a binary input image (the implementation used here only accepts binary images). Setting all values above 150 to zero yields

clc3thr1
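This thresholding step amounts to a single comparison per pixel; a minimal NumPy sketch on a toy graylevel patch (150 is the cutoff quoted above):

```python
import numpy as np

gray = np.array([[200,  40, 180],
                 [ 30, 210,  25]], dtype=np.uint8)  # toy graylevel patch

binary = np.where(gray > 150, 0, 1)  # values above 150 become background
print(binary)
```

The dark pixels (the dead cells in the original image) become the foreground of the binary image.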

The white dots correspond to the black, dead cells in the original image. The connected components of this binary image can be seen in

clc3lab1

The corresponding colormap shows that the highest value is 163, i.e. the output contains 163 connected components. To see the result more clearly, we would now like to assign a color to each component. The problem here is that we cannot find 163 colors, each different enough from all the others to be distinguished by the human eye. Two possible ways to assign the colors are as follows:

  • We only use a few colors (e.g. 8) which are clearly different from each other and assign each graylevel of the connected component image to one of these colors. The result can be seen in
    clc3lab2

    We can now easily distinguish two different components, provided that they were not assigned the same color. However, we lose a lot of information, because the (in this case) 163 graylevels are reduced to 8 different colors.

  • We can assign a different color to each grayvalue, many of them being quite similar. A typical result for the above image would be
    clc3lab3

    Although we sometimes cannot make out the border between two components that are very close to each other, we do not lose any information.

More sophisticated techniques combine both methods. They make sure that two nearby components always have distinct colors by taking into account the colors of the neighbors of the component which is going to be assigned a color.
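The first scheme, reusing a small fixed palette across many labels, can be sketched as follows; the 8-entry palette itself is our own illustrative choice:

```python
import numpy as np

# Hypothetical palette: index 0 keeps the background black, and the
# remaining 8 entries are clearly distinct colors that get reused.
PALETTE = np.array([[0, 0, 0],
                    [255, 0, 0], [0, 255, 0], [0, 0, 255],
                    [255, 255, 0], [255, 0, 255], [0, 255, 255],
                    [255, 128, 0], [255, 255, 255]], dtype=np.uint8)

def color_label(labels):
    """Map every nonzero label onto one of the 8 reusable colors."""
    idx = np.where(labels == 0, 0, (labels - 1) % 8 + 1)
    return PALETTE[idx]
```

With 163 labels, labels 1, 9, 17, ... all receive the same color, which is exactly the information loss noted above.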

If we compare the above color-labeled images with the original, we can see that the number of components is quite close to the number of dead cells in the image. However, there is a small discrepancy, since some cells merge into a single component and some dead cells are suppressed by the threshold.

We encounter greater problems when trying to count the number of turkeys in

tur1gry1

Labeling the thresholded image

tur1thr1

yields

tur1lab1

as a graylevel or

tur1lab2

as a color labeled version. Although we managed to assign one connected component to each turkey, the number of components (196) does not correspond to the number of turkeys.

The last two examples showed that connected components labeling is the easy part of the automated analysis process, whereas the major task is to obtain a good binary image which separates the objects from the background.

Finally, we consider the problem of labeling data output from a classification process. We can classify multi-spectral images (e.g. a two-band image consisting of

avs2

(visible range) and

air2

(infra-red range)) in order to find k groupings of the data based on clusters in the pixel intensities. This result is shown in the image

avi2cls1

where the multi-spectral image was classified into two groups. If we now apply connected components labeling, connected geographic regions which belong to the same intensity classes can be labeled. The result contains 49 different components, most of them being only a few pixels large. The color labeled version can be seen in

avi2lab1

One could now use this image to further investigate the regions, e.g. if some components changed their size compared to a reference image or if other regions merged together.

Common Variants

A collection of morphological operators exists for extracting connected components and labeling them in various ways. A simple method for extracting connected components of an image combines dilation and the mathematical intersection operation. The former identifies pixels which are part of a continuous region sharing a common set of intensity values V={1} and the latter eliminates dilations centered on pixels with V={0}. The structuring element used defines the desired connectivity.
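A minimal sketch of that dilation-plus-intersection idea in Python; the helper names are our own, and the 3×3 structuring element corresponds to 8-connectivity:

```python
import numpy as np

def dilate8(img):
    """Binary dilation by a 3x3 structuring element (8-connectivity)."""
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for di in (0, 1, 2):          # OR together the nine shifted copies
        for dj in (0, 1, 2):
            out |= padded[di:di + h, dj:dj + w]
    return out

def extract_component(img, seed):
    """Grow from a seed pixel by repeated dilation intersected with img."""
    comp = np.zeros_like(img)
    comp[seed] = img[seed]
    while True:
        grown = dilate8(comp) & img   # intersection keeps only V={1} pixels
        if np.array_equal(grown, comp):
            return comp               # stable: the whole component is reached
        comp = grown
```

Starting from any foreground pixel, the iteration converges to exactly the connected component containing the seed.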

More sophisticated variants of this include a set of geodesic functions for measuring the exact shape of distinct objects in an image. These operators are based on the notion of geodesic distance d, which is defined as the shortest distance between two points located within an image object such that the entire path between the points is included in the object. One way to obtain this information is to apply a series of dilations of size 1. (Distance measuring operators are described in fuller detail elsewhere.)
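That series of size-1 dilations can be sketched directly: each iteration of a 3×3 dilation, kept inside the object, advances the reached set by one geodesic step. The helper names are again our own, and the 3×3 element means distances are measured in the Chebyshev sense:

```python
import numpy as np

def dilate8(img):
    """Binary dilation by a 3x3 structuring element (8-connectivity)."""
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out |= padded[di:di + h, dj:dj + w]
    return out

def geodesic_distance(obj, seed):
    """Geodesic distance from seed inside obj, via dilations of size 1.
    Pixels outside the object (or unreachable from the seed) stay at -1."""
    obj = obj.astype(bool)
    reached = np.zeros_like(obj)
    reached[seed] = obj[seed]
    dist = np.where(reached, 0, -1)
    d = 0
    while True:
        grown = dilate8(reached) & obj   # one dilation of size 1, kept in obj
        new = grown & ~reached
        if not new.any():
            return dist
        d += 1
        dist[new] = d                    # newly reached pixels get distance d
        reached = grown
```

Displaying `dist` as graylevels gives exactly the kind of labeled image shown below, with lighter pixels representing larger geodesic distances.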

For example, consider the image

tol1crp1

which shows a triangular block. Applying a geodesic operator to the image produces a labeled image

bld1lab1

wherein the graylevel intensity labeling across the surface of the block encodes geodesic distance, i.e. light pixels represent larger distances.


Exercises

  1. How would the scanning algorithm described above label an object containing a hole? How would the morphological approach? Investigate how your implementation handles the image
    cir1

  2. Apply connected components labeling to an image counting problem. Starting from
    pen1

    produce a suitable binary image (i.e. threshold the image) and then apply connected components labeling with the aim of obtaining a distinct label for each penguin. (Note, this may require some experimentation with threshold values.)

  3. The remote sensing example given in the text used a rather convoluted set of operations (e.g. classification, thresholding and then labeling). See if you can obtain similar results by simply thresholding one of the original images, such as
    avs1

    and/or

    air1

    and then applying labeling directly.

  4. Classifying the above two-band satellite image of Africa using the classification parameter k=4 yields
    avi2cls5

    Use labeling to identify each connected component in this image. If your implementation of the operator does not support graylevel images, use thresholding to produce four binary images, each containing one of the four classes. Then apply connected components labeling to each of the binary images.

  5. Try using thresholding and connected components analysis to segment the image
    aer1

    into urban and rural areas. You might investigate thresholding within a particular color band(s) to create two binary files containing a description of (i) rural areas, i.e. fields, trees, hills, etc. around the image perimeter and (ii) urban areas, i.e. roads, houses, etc. in the image interior.
