RGB image normalization


Q: Can someone please tell me what the use of normalizing an image is? I've read that it is to remove the effect of any change in intensity. Can I make the intensity of two images similar or the same? And how do I do the normalization?


A: Indeed you are right: converting an RGB image into normalized RGB removes the effect of any intensity variations. An interesting experiment is to take a photograph of something like a single-colour rose. Converting it into normalized RGB reduces the rose to an amorphous blob of colour, because all of the detail of the flower comes from gentle changes in intensity caused by the shadowing of the petals.

 

How to do it is quite simple:

 

1) Split your RGB image into three separate greyscale images representing the red, green and blue colour planes:

Image_red = Image_rgb(:,:,1);

Image_green = Image_rgb(:,:,2);

Image_blue = Image_rgb(:,:,3);

 

 

2) For each pixel in the image, take the three corresponding components from the red, green and blue matrices and calculate the following (remember to cast to doubles first, otherwise the integer division will give you zero):

NormalizedRed = Red/sqrt(Red^2 + Green^2 + Blue^2);
NormalizedGreen = Green/sqrt(Red^2 + Green^2 + Blue^2);
NormalizedBlue = Blue/sqrt(Red^2 + Green^2 + Blue^2);

 

Apply these transformations to every pixel in the image.

 

3) Re-form a colour image back into uint8 (you need to additionally scale the image by a factor of sqrt(3) to get a fully saturated colour representation). Restack the three normalized planes to form an RGB image and display it. From memory, one of the many ways to do this is

Image_normalizedRGB = cat(3, NormalizedRed, NormalizedGreen, NormalizedBlue);
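
As a rough sketch of the uint8 conversion (assuming the normalized planes are doubles in the range 0-1; the sqrt(3) scaling is as described above, and uint8() clamps anything above 255):

% Scale by sqrt(3) so a neutral grey pixel (1/sqrt(3) per channel) reaches
% full brightness; uint8() clamps any values that end up above 255.
Image_uint8 = uint8(255 * sqrt(3) * Image_normalizedRGB);
figure; imshow(Image_uint8);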

 

4) Things to watch for: when you get a true black pixel, Red = Green = Blue = 0, so your transformation will try to compute 0/0 and fall over laughing. Catch this possibility and set each ratio to 1/sqrt(3), which is normalized grey. Also look out for noise that sometimes appears in regions that were very dark in the original image.
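
One way to catch that case is sketched below (the variable names are illustrative, and R, G, B are assumed to be double-valued colour planes):

magnitude = sqrt(R.^2 + G.^2 + B.^2);
black = (magnitude == 0);             % true black pixels would give 0/0
magnitude(black) = 1;                 % make the division safe
R_norm = R ./ magnitude;  R_norm(black) = 1/sqrt(3);   % normalized grey
G_norm = G ./ magnitude;  G_norm(black) = 1/sqrt(3);
B_norm = B ./ magnitude;  B_norm(black) = 1/sqrt(3);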

 

For speed you might get away with the approximation:

 

NormalizedRed = Red/(Red + Green + Blue); etc.
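
A vectorized sketch of that approximation (the eps offset is my addition, purely to avoid 0/0 at black pixels, and is not part of the original formula):

s = R + G + B + eps;     % per-pixel channel sum; eps guards against 0/0
r = R ./ s;
g = G ./ s;
b = B ./ s;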

 

Normalizing can provide exactly what you need to compare two images taken under varying illumination, PROVIDED the colour temperature of the light source doesn't change as the illumination output varies. For example, a tungsten lamp shifts to a lower colour temperature when driven at a lower voltage, so that might cause problems; however, variation in daylight caused by clouds or other shadowing is easily taken out.

 

Code:


img = imread('car.png');       % "img" rather than "image", which shadows a built-in function

R = double(img(:,:,1));        % cast to double before dividing
G = double(img(:,:,2));
B = double(img(:,:,3));

% Per-pixel magnitude; adding eps avoids 0/0 at true black pixels
% (those pixels stay black here - see point 4 for the 1/sqrt(3) alternative).
magnitude = sqrt(R.^2 + G.^2 + B.^2) + eps;

R1 = R./magnitude;
G1 = G./magnitude;
B1 = B./magnitude;

% Stack the normalized planes into one RGB image (doubles in the range 0-1).
image2 = cat(3, R1, G1, B1);
%imshow(image2,[]);

% Transform the image to uint8 and see the output.
image3 = 255 * image2;
%figure;imshow(uint8(image3));
