Frei-Chen edge detector

In this article, I would like to present an edge detection algorithm that has performance characteristics similar to those of the well-known Sobel operator but provides slightly better edge detection, and that can be seamlessly extended, with little to no performance overhead, to detect corners alongside edges. Like the Sobel filter, the algorithm works on a 3×3 texel footprint, but it applies a total of nine convolution masks to the image, which can be used for either edge or corner detection. The article presents the mathematical background needed to implement the edge detector and provides a reference implementation written in C/C++ using OpenGL that showcases both the Frei-Chen and the Sobel edge detection filters applied to the same image.

I first encountered the algorithm during my computer graphics studies, when one of my homework assignments was to implement the Frei-Chen edge detector. As I mentioned in an earlier post, after seeing the success of my article about the Gaussian blur filter, I am willing to provide source code for more basic graphics algorithms. This is a similarly basic article, considering that it only shows how to apply a particular convolution-based algorithm to a still image, while the possibilities this edge detection algorithm opens up form a more complex topic that is out of the scope of this article.

As the provided reference implementation also showcases applying the Sobel operator to an image, I would like to present that first and then continue with the Frei-Chen masking set. Those who are already familiar with edge detection and the Sobel operator can skip the following two sections.

Edge detection

Before getting deep into how to implement edge detectors, let’s first talk about what an edge detector is and why we need it.

In general, edge detection is one of the most fundamental image processing tools, particularly used in the areas of feature detection and feature extraction. The aim of the technique is to identify points of a digital image at which the intensity changes sharply. The cause of these intensity changes can be discontinuities in depth or surface orientation, changes in lighting conditions, or many other factors. In the ideal case, applying an edge detector to an image yields a set of connected lines or curves that indicate the boundaries of objects.

Not going that far, what an edge detector gives us to begin with is a gray-scale image where each pixel’s intensity approximates the likelihood that the pixel belongs to an object boundary. How well a particular algorithm detects such pixels depends on many factors, and it is usually best to try multiple edge detectors in order to choose the one that best fits the particular use case.

Once we have this gray-scale image, we usually have to define a threshold value that is used as the acceptance criterion for edge pixels: if the previously calculated intensity value is above this threshold, we accept the pixel as an edge; otherwise we don’t. This is the so-called binarization stage. Additionally, subsequent image processing algorithms can be used to further interpret the edge image.
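The binarization stage can be sketched in C++ as follows; the function name and the flat float-vector image representation are my own assumptions, not part of the reference implementation:

```cpp
#include <cstddef>
#include <vector>

// Binarize a gray-scale edge image: pixels whose intensity exceeds the
// threshold are accepted as edge pixels (1.0), all others are rejected (0.0).
std::vector<float> binarize(const std::vector<float>& edgeImage, float threshold)
{
    std::vector<float> result(edgeImage.size());
    for (std::size_t i = 0; i < edgeImage.size(); ++i)
        result[i] = (edgeImage[i] > threshold) ? 1.0f : 0.0f;
    return result;
}
```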

In computer graphics, edge detection is usually used to implement various image decoration algorithms. Perhaps the most popular applications of edge detectors nowadays are non-photorealistic rendering (NPR) and screen-space anti-aliasing techniques.

Sobel filter

The Sobel edge detection filter works on a 3×3 texel footprint and applies two convolution masks to the image, intended to detect the horizontal and vertical gradients of the image. The filter weights can be seen in the figure below:

Sobel masks

These masks are applied to the intensities gathered from the 3×3 footprint of the image, and the results are accumulated to produce the final gradient value in the following way:

Sobel gradient: G = √(G₁² + G₂²), where Gᵢ denotes the response of the i-th mask

The actual algorithm can be seen in the accompanying demo, which provides a GLSL-based implementation. The algorithm is defined to work on a single-channel image; however, it can easily be extended to a usual three-channel RGB image, either by applying it separately to each channel or by first calculating a gray-scale value from the color components. The former is more computationally intensive but usually provides better results, with the threshold criterion accepting a pixel as a boundary point if the gradient value is larger than the threshold for any of the color channels. The reference implementation, however, is based on the latter approach for the sake of simplicity, so for each pixel an intensity value is first calculated simply by taking the length of the vector comprised of the RGB components.
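To make the accumulation step concrete, the same computation can be sketched on the CPU. The function name and the row-major single-channel float image layout are assumptions of mine, but the mask weights match those used in the shader:

```cpp
#include <cmath>

// CPU-side sketch of the Sobel operator on a single-channel, row-major float
// image, mirroring the GLSL reference implementation.
// The pixel (x, y) is assumed not to lie on the image border.
float sobelGradient(const float* image, int width, int x, int y)
{
    // The two Sobel masks, indexed as [row][column] (i.e. [y][x]).
    static const float Gx[3][3] = { { 1, 0, -1 }, { 2, 0, -2 }, { 1, 0, -1 } };
    static const float Gy[3][3] = { { 1, 2, 1 }, { 0, 0, 0 }, { -1, -2, -1 } };

    float gx = 0.0f, gy = 0.0f;
    for (int j = -1; j <= 1; ++j)
        for (int i = -1; i <= 1; ++i) {
            float texel = image[(y + j) * width + (x + i)];
            gx += Gx[j + 1][i + 1] * texel;
            gy += Gy[j + 1][i + 1] * texel;
        }
    // Accumulate the two directional responses into the gradient magnitude.
    return std::sqrt(gx * gx + gy * gy);
}
```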

Frei-Chen filter

The Frei-Chen edge detector also works on a 3×3 texel footprint but applies a total of nine convolution masks to the image. The Frei-Chen masks are unique in that they form an orthonormal basis of the space of 3×3 image areas. This implies that any 3×3 image area can be represented as a weighted sum of the nine Frei-Chen masks, which can be seen below:

Frei-Chen masks
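The claim that the nine masks form an orthonormal basis can be verified numerically; the sketch below builds the masks with the same weights as the shader (the helper names are mine) and checks their pairwise dot products:

```cpp
#include <array>
#include <cmath>

using Mask = std::array<double, 9>;

// Build the nine Frei-Chen masks (same weights as in the shader),
// each stored row by row as a flat array of 9 coefficients.
std::array<Mask, 9> freiChenMasks()
{
    const double r = std::sqrt(2.0);
    std::array<Mask, 9> G = {{
        {  1,  r,  1,  0, 0,  0, -1, -r, -1 },
        {  1,  0, -1,  r, 0, -r,  1,  0, -1 },
        {  0, -1,  r,  1, 0, -1, -r,  1,  0 },
        {  r, -1,  0, -1, 0,  1,  0,  1, -r },
        {  0,  1,  0, -1, 0, -1,  0,  1,  0 },
        { -1,  0,  1,  0, 0,  0,  1,  0, -1 },
        {  1, -2,  1, -2, 4, -2,  1, -2,  1 },
        { -2,  1, -2,  1, 4,  1, -2,  1, -2 },
        {  1,  1,  1,  1, 1,  1,  1,  1,  1 }
    }};
    const double scale[9] = { 1/(2*r), 1/(2*r), 1/(2*r), 1/(2*r),
                              0.5, 0.5, 1.0/6.0, 1.0/6.0, 1.0/3.0 };
    for (int k = 0; k < 9; ++k)
        for (int e = 0; e < 9; ++e)
            G[k][e] *= scale[k];
    return G;
}

// Dot product of two masks treated as 9-component vectors.
double dotMask(const Mask& a, const Mask& b)
{
    double d = 0.0;
    for (int e = 0; e < 9; ++e)
        d += a[e] * b[e];
    return d;
}
```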

The first four Frei-Chen masks above are used for edges, the next four for lines, and the last mask is used to compute averages. For edge detection, the appropriate masks are chosen and the image is projected onto them. The projection equation is given below:

Frei-Chen equation: cos θ = √(M / S), where M is the sum of the squared mask responses (I · Gₖ)² over the selected masks and S is the same sum taken over all nine masks

When using the Frei-Chen masks for edge detection, we are searching for the cosine defined above, with the first four masks as the elements of importance, so the sum in the numerator goes from one to four.
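The projection can be sketched on the CPU as well; the helper below (a hypothetical name) assumes the nine squared convolution responses have already been computed, just like the cnv array in the shader:

```cpp
#include <cmath>

// Frei-Chen edge measure from the nine squared mask responses:
// M projects the image patch onto the four edge masks, S onto the full basis,
// and sqrt(M/S) is the cosine of the angle between the patch and the edge
// subspace.
float freiChenEdgeMeasure(const float cnv[9])
{
    float M = cnv[0] + cnv[1] + cnv[2] + cnv[3];
    float S = 0.0f;
    for (int i = 0; i < 9; ++i)
        S += cnv[i];
    return std::sqrt(M / S);
}
```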

The application of a threshold and the handling of multi-channel images work exactly the same way as in the case of the Sobel filter. Similarly, the reference implementation treats the image as a single-channel image by first calculating an intensity value for each texel in the same fashion as the previously presented filter.

Comparison

Based on my experience, the Frei-Chen edge detector performs better than the Sobel filter, as it is less sensitive to noise and is able to detect edges that have small gradients and are thus missed by the basic Sobel filter. For a comparison, see the figure below:

Comparison of edge detectors

Comparison of edge detectors: original image (left), Sobel filter (middle), Frei-Chen filter (right).

The reason why the Frei-Chen edge detector seems to work better is that its construction includes a normalization factor as well as other factors that are meant to exclude all features except edges. A normalization factor can also be added to the Sobel filter by introducing a third mask, equivalent to the ninth Frei-Chen mask, that is used to normalize the gradients. This could help reduce the number of undetected edges and the amount of noise that arises from the fact that the Sobel filter calculates absolute gradients rather than relative ones.

From a performance point of view, the Frei-Chen edge detector is much more heavyweight, as it uses nine masks instead of two; however, in practice the performance difference between the two is much smaller, taking into consideration that both use the same sized texel footprint and that the computational performance of today’s GPUs is usually much higher than their texture fetching performance.

Conclusion

We have presented an alternative to the Sobel filter in the form of the Frei-Chen edge detector which, despite having little impact on performance compared to the Sobel operator, provides better edge detection quality. As there is little to no difference in how the input data has to be organized and how the result is output, the Frei-Chen edge detector can easily be used as a drop-in replacement in implementations that previously used the Sobel filter.

Source code and Win32 binary can be acquired in the downloads section.

I would like to encourage readers of this article to add the Frei-Chen edge detector to their software and compare whether it yields better results than the Sobel filter in applications that rely on the output of the edge detection filter. I would be interested in how the filter works in real-life computer graphics scenarios.

Thanks in advance and hope you enjoyed the article!




==========================================================

sobel.frag


#version 330 core

uniform sampler2D image;

out vec4 FragmentColor;

uniform mat3 G[2] = mat3[](
 mat3( 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, -1.0, -2.0, -1.0 ),
 mat3( 1.0, 0.0, -1.0, 2.0, 0.0, -2.0, 1.0, 0.0, -1.0 )
);


void main(void)
{
 mat3 I;
 float cnv[2];
 vec3 texel;
 
 /* fetch the 3x3 neighbourhood and use the RGB vector's length as intensity value */
 /* (the variable is named "texel" because "sample" is a keyword in newer GLSL versions) */
 for (int i=0; i<3; i++)
 for (int j=0; j<3; j++) {
  texel = texelFetch( image, ivec2(gl_FragCoord) + ivec2(i-1,j-1), 0 ).rgb;
  I[i][j] = length(texel);
 }
 
 /* calculate the convolution values for all the masks */
 for (int i=0; i<2; i++) {
  float dp3 = dot(G[i][0], I[0]) + dot(G[i][1], I[1]) + dot(G[i][2], I[2]);
  cnv[i] = dp3 * dp3;
 }

 /* cnv[] already holds the squared responses, so the gradient magnitude is the square root of their sum */
 FragmentColor = vec4(0.5 * sqrt(cnv[0] + cnv[1]));
}


==============================================================

frei-chen.frag

#version 330 core

uniform sampler2D image;

out vec4 FragmentColor;

uniform mat3 G[9] = mat3[](
 1.0/(2.0*sqrt(2.0)) * mat3( 1.0, sqrt(2.0), 1.0, 0.0, 0.0, 0.0, -1.0, -sqrt(2.0), -1.0 ),
 1.0/(2.0*sqrt(2.0)) * mat3( 1.0, 0.0, -1.0, sqrt(2.0), 0.0, -sqrt(2.0), 1.0, 0.0, -1.0 ),
 1.0/(2.0*sqrt(2.0)) * mat3( 0.0, -1.0, sqrt(2.0), 1.0, 0.0, -1.0, -sqrt(2.0), 1.0, 0.0 ),
 1.0/(2.0*sqrt(2.0)) * mat3( sqrt(2.0), -1.0, 0.0, -1.0, 0.0, 1.0, 0.0, 1.0, -sqrt(2.0) ),
 1.0/2.0 * mat3( 0.0, 1.0, 0.0, -1.0, 0.0, -1.0, 0.0, 1.0, 0.0 ),
 1.0/2.0 * mat3( -1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, -1.0 ),
 1.0/6.0 * mat3( 1.0, -2.0, 1.0, -2.0, 4.0, -2.0, 1.0, -2.0, 1.0 ),
 1.0/6.0 * mat3( -2.0, 1.0, -2.0, 1.0, 4.0, 1.0, -2.0, 1.0, -2.0 ),
 1.0/3.0 * mat3( 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 )
);


void main(void)
{
 mat3 I;
 float cnv[9];
 vec3 texel;
 
 /* fetch the 3x3 neighbourhood and use the RGB vector's length as intensity value */
 /* (the variable is named "texel" because "sample" is a keyword in newer GLSL versions) */
 for (int i=0; i<3; i++)
 for (int j=0; j<3; j++) {
  texel = texelFetch( image, ivec2(gl_FragCoord) + ivec2(i-1,j-1), 0 ).rgb;
  I[i][j] = length(texel);
 }
 
 /* calculate the convolution values for all the masks */
 for (int i=0; i<9; i++) {
  float dp3 = dot(G[i][0], I[0]) + dot(G[i][1], I[1]) + dot(G[i][2], I[2]);
  cnv[i] = dp3 * dp3;
 }

 float M = (cnv[0] + cnv[1]) + (cnv[2] + cnv[3]);
 float S = (cnv[4] + cnv[5]) + (cnv[6] + cnv[7]) + (cnv[8] + M);
 
 FragmentColor = vec4(sqrt(M/S));
}