Simple way to check if an image bitmap is blurry


I am looking for a "very" simple way to check if an image bitmap is blurry. I do not need an accurate, complicated algorithm involving FFTs, wavelets, etc. Just a very simple idea, even if it is not accurate.

I've thought of computing the average Euclidean distance between pixel (x,y) and pixel (x+1,y) over their RGB components and then applying a threshold, but it works very badly. Any other ideas?

You can perhaps use the average variance of a sliding window to get a rough idea of how much high-frequency content you have. Convert to greyscale first. The answers to this question have many more options. – Roger Rowland, Jan 14 '14 at 7:16
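As a rough illustration of this suggestion, here is a Python sketch of a sliding-window variance measure (the window size, SciPy's uniform_filter, and the function name are assumptions, not part of the comment):

    # Sketch: mean local variance as a crude high-frequency measure.
    # Assumes Pillow, NumPy and SciPy; the window size of 8 is arbitrary.
    import numpy as np
    from PIL import Image
    from scipy.ndimage import uniform_filter

    def mean_local_variance(path, size=8):
        g = np.asarray(Image.open(path).convert("L")).astype(float)
        # Per-window variance via var = E[x^2] - (E[x])^2.
        local_mean = uniform_filter(g, size)
        local_sq_mean = uniform_filter(g ** 2, size)
        return (local_sq_mean - local_mean ** 2).mean()

Since blurring removes high-frequency content, a blurry image should give a noticeably lower value than a sharp one, though the threshold has to be calibrated per application.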

3 Answers

Answer 1 (accepted, 5 votes)

Don't calculate the average differences between adjacent pixels.

Even when a photograph is perfectly in focus, it can still contain large areas of uniform colour, like the sky for example. These will push down the average difference and mask the details you're interested in. What you really want to find is the maximum difference value.

Also, to speed things up, I wouldn't bother checking every pixel in the image. You should get reasonable results by checking along a grid of horizontal and vertical lines spaced, say, 10 pixels apart.
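As a rough illustration before the results below, here is a Python sketch of that maximum-difference check on a sparse grid (the answer's own implementation, linked in the comments, is PHP/GD; Pillow, NumPy and the function name here are my substitutions):

    # Sketch: maximum adjacent-pixel difference, sampled on a sparse grid.
    import numpy as np
    from PIL import Image

    def sharpness_percent(path, step=10):
        # Green channel only, as in the answer's tests.
        green = np.asarray(Image.open(path).convert("RGB"))[:, :, 1].astype(int)
        # Largest difference between horizontally adjacent pixels on every
        # `step`-th row, and between vertically adjacent pixels on every
        # `step`-th column.
        h = np.abs(np.diff(green[::step, :], axis=1)).max()
        v = np.abs(np.diff(green[:, ::step], axis=0)).max()
        # Report the maximum pixel difference as a percentage of 255.
        return max(h, v) / 255 * 100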

Here are the results of some tests with PHP's GD graphics functions using an image from Wikimedia Commons (Bokeh_Ipomea.jpg). The sharpness values are simply the maximum pixel difference values as a percentage of 255 (I only looked in the green channel; you should probably convert to greyscale first). The numbers underneath show how long it took to process the image.

  • Close-up of Ipomea flower: sharpness calculated as 71.0%
  • Same image with slight blurring: sharpness reduced to 36.1%
  • Same image with severe blurring: sharpness now 17.6%

If you want them, here are the source images I used:

  • original
  • slightly blurred
  • blurred

Update:

There's a problem with this algorithm in that it relies on the image having a fairly high level of contrast as well as sharply focused edges. It can be improved by finding the maximum pixel difference (maxdiff), then finding the overall range of pixel values in a small area centred on this location (range). The sharpness is then calculated as follows:

sharpness = (maxdiff / (offset + range)) * (1.0 + offset / 255) * 100%

where offset is a parameter that reduces the effects of very small edges so that background noise does not affect the results significantly. (I used a value of 15.)
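A hedged Python sketch of the updated formula (the 9×9 search window and offset of 15 come from the answer; the greyscale conversion, names and NumPy usage are my own):

    # Sketch: sharpness = (maxdiff / (offset + range)) * (1 + offset/255) * 100%
    import numpy as np
    from PIL import Image

    def sharpness_v2(path, offset=15, half=4):       # half=4 gives a 9x9 window
        g = np.asarray(Image.open(path).convert("L")).astype(int)
        dx = np.abs(np.diff(g, axis=1))              # horizontal differences
        dy = np.abs(np.diff(g, axis=0))              # vertical differences
        # Value and location of the maximum adjacent-pixel difference.
        if dx.max() >= dy.max():
            maxdiff, (y, x) = dx.max(), np.unravel_index(dx.argmax(), dx.shape)
        else:
            maxdiff, (y, x) = dy.max(), np.unravel_index(dy.argmax(), dy.shape)
        # Overall range of pixel values in a small area centred on that spot.
        win = g[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
        rng = int(win.max()) - int(win.min())
        return (maxdiff / (offset + rng)) * (1.0 + offset / 255) * 100

Note that the offset terms cancel for a full-strength edge, so a pure black/white transition still scores 100%.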

This produces fairly good results. Anything with a sharpness of less than 40% is probably out of focus. Here are some examples (the locations of the maximum pixel difference and the 9×9 local search areas are also shown for reference):

"Pure Linen" by mystuart @ Flickr(source)

"Blurred Buty" by Ilya @ Flickr(source)

"Blurry Downtown" by Andy Arthur @ Flickr(source)

"blurry volcanic mound" by matt Dombrowski @ Flickr(source)

The results still aren't perfect, though. Subjects that are inherently blurry will always result in a low sharpness value:

"Clouds and sky" by William Warby @ Flickr(source)

Bokeh effects can produce sharp edges from point sources of light, even when they are completely out of focus:

"The Side" by HD41117 @ Flickr(source)

You commented that you want to be able to reject user-submitted photos that are out of focus. Since this technique isn't perfect, I would suggest that you notify the user when an image appears blurry rather than rejecting it outright.

Thanks @squeamish-ossifrage for your reply, but could you please provide some additional details? I would like to implement your method but it is not clear to me how it works exactly. Do you compute the colour distance between each pixel and its neighbour and then select the maximum? Is that correct? If so, suppose your image contains two adjacent pixels, one white and one black. This will give the maximum distance of 255 and a sharpness of 100%. Is that correct? – user2923045, Jan 15 '14 at 8:11
 
Yes, that's right. Here's the PHP source code. – squeamish ossifrage, Jan 15 '14 at 8:16
 
This surely works and is as simple as required. The only concern is time. Choose a sparse grid for speedy results. – sepdek, Jan 15 '14 at 15:52
 
It works fine, but unfortunately it is only useful when you have several instances of the same picture and you want to determine which of them is the best (in terms of sharpness). My problem is different: I have a single instance of a picture and I want to determine its quality (in terms of sharpness) and reject it if it is blurry... – user2923045, Jan 17 '14 at 8:27
 
@user2923045 I've updated my answer; please take a look. – squeamish ossifrage, Jan 20 '14 at 23:53
Answer 2 (2 votes)

I suppose that, philosophically speaking, all natural images are blurry... How blurry, and to what degree, is something that depends on your application. Broadly speaking, the blurriness or sharpness of an image can be measured in various ways. As a first easy attempt I would check the energy of the image, defined as the normalised sum of the squared pixel values:

E = (1/N) * Σ I²,   where I is the image and N the number of pixels (defined for grayscale)

First you may apply a Laplacian of Gaussian (LoG) filter to detect the "energetic" areas of the image and then check the energy. The blurry image should show considerably lower energy.

See an example in MATLAB using the typical grayscale lena image:

  • the original image
  • the blurry image (blurred with a Gaussian filter)
  • the LoG image of the original
  • the LoG image of the blurry one

If you just compute the energy of the two LoG images you get:

E_or = 1265,     E_bl = 88

which is a huge difference...
Then you just have to select a threshold to judge which amount of energy is acceptable for your application...
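For reference, here is a rough Python equivalent of this LoG-plus-energy pipeline (the answer's example is MATLAB; SciPy's gaussian_laplace and the sigma value here are my assumptions):

    # Sketch: energy of the Laplacian-of-Gaussian filtered image.
    import numpy as np
    from PIL import Image
    from scipy.ndimage import gaussian_laplace

    def log_energy(path, sigma=2.0):
        g = np.asarray(Image.open(path).convert("L")).astype(float)
        log_img = gaussian_laplace(g, sigma=sigma)   # LoG filter
        # Normalised sum of squared pixel values, E = (1/N) * sum(I^2).
        return (log_img ** 2).sum() / log_img.size

The blurry version of an image should score much lower than the sharp one; the exact numbers depend on sigma and the image scale.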

The original image has a lot of granular noise that was eliminated by your Gaussian filter. A real photo would still contain the same amount of noise, even if it was incorrectly focused. – squeamish ossifrage, Jan 15 '14 at 10:32
 
@squeamishossifrage I strongly disagree with your opinion. To my understanding, there is no way an image can be blurry (or incorrectly focused) and still be "granular" enough to get high energy... – sepdek, Jan 15 '14 at 14:47
 
Let me try to explain, then. Look at the background of your original image, in the region above the hat. The image is clearly noisy here, even though it is already out of focus. This reflects the granularity of the film on which this photograph was taken. On the other hand, photographs taken on digital cameras are affected by dark current and thermal noise. In both cases, this noise will have a significant effect on the mean energy of the image. – squeamish ossifrage, Jan 15 '14 at 16:32
 
@squeamishossifrage I can understand this, but my argument was that this "granularity" will not contribute significantly to an increase in energy. I have tested the method I proposed with various images, including the images in your example, and my approach still holds. – sepdek, Jan 15 '14 at 19:44
 
@squeamishossifrage ...continuing... I have also implemented your method in MATLAB, which is much easier, and tried it using the whole image rather than just samples on a grid. The results obtained by both approaches are consistent. But, clearly, my approach will be slower than yours, which uses simpler operations, and could surely become even faster if your grid becomes sparser (while still representing the image). – sepdek, Jan 15 '14 at 19:44
Answer 3 (1 vote)

Calculate the average L1 distance between adjacent pixels:

N1 = 1/(2*N_pixel) * sum( abs(p(x,y)-p(x-1,y)) + abs(p(x,y)-p(x,y-1)) )

Then the average squared L2 distance:

N2 = 1/(2*N_pixel) * sum( (p(x,y)-p(x-1,y))^2 + (p(x,y)-p(x,y-1))^2 )

Then the ratio N2 / (N1*N1) is a measure of blurriness. This is for grayscale images; for colour images, do this for each channel separately.
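A minimal Python sketch of this measure on a grayscale image (the function name and Pillow/NumPy usage are my own; the formulas follow the answer):

    # Sketch: ratio of mean squared difference to squared mean absolute
    # difference of adjacent pixels, per the answer's N2 / (N1*N1).
    import numpy as np
    from PIL import Image

    def blurriness_ratio(path):
        p = np.asarray(Image.open(path).convert("L")).astype(float)
        dx = p[:, 1:] - p[:, :-1]    # p(x,y) - p(x-1,y)
        dy = p[1:, :] - p[:-1, :]    # p(x,y) - p(x,y-1)
        n = 2 * p.size
        n1 = (np.abs(dx).sum() + np.abs(dy).sum()) / n   # average L1 distance
        n2 = ((dx ** 2).sum() + (dy ** 2).sum()) / n     # average squared distance
        return n2 / (n1 * n1)

The answer gives no threshold, so the ratio has to be calibrated against images of known sharpness.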

Thanks @pentadecagon. Do you suggest converting the image to grayscale, or repeating the above task for each channel and then aggregating the results in some way? – user2923045, Jan 14 '14 at 8:06