Real-Time Implementation of a Local Gaussian Blur


The goal is the effect shown in the image above: at the bottom of each cell, a strip of the image is Gaussian-blurred.

The general approach:

1. Crop the cell's original image, keeping only the part that needs to be blurred;

2. Apply a Gaussian blur to the cropped image;

3. Fill the processed image into the view at the bottom of the cell.


These steps come down to two methods: one crops the image, the other applies the Gaussian blur (a sketch combining them into a full pipeline appears near the end of this post):

1. Cropping the image:

/**
 *  Crop an image.
 *
 *  @param theImage the image to crop
 *
 *  @return the cropped image
 */
- (UIImage *)imageFromView:(UIImage *)theImage
{
    // Hard-coded for this layout: keep a 320 x 30 strip, starting
    // 110 points down from the top of the source image.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 30), NO, 0);
    [theImage drawAtPoint:CGPointMake(0, -110)];
    UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return im;
}
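Note that the 320 x 30 context size and the (0, -110) drawing offset above are hard-coded for this particular cell layout. A more reusable variant (a sketch of my own with a cropRect parameter, not from the original post) takes the region to keep as a CGRect and respects the image's scale:

// Sketch (assumption, not from the original post): crop an arbitrary
// rect, given in the image's point coordinate space.
- (UIImage *)imageByCropping:(UIImage *)theImage toRect:(CGRect)cropRect
{
    UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, theImage.scale);
    // Shift the source so cropRect's origin lands at (0, 0) in the context.
    [theImage drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)];
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return cropped;
}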
2. Gaussian blur:

While implementing the blur I ran into a problem: with method (1) below, the app would stutter:

(1)

// Requires #import <CoreImage/CoreImage.h>.
- (UIImage *)blur:(UIImage *)theImage
{
    // Create our blurred image.
    // NOTE: creating a CIContext on every call is expensive, and is a
    // large part of why this version stutters during scrolling.
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:theImage.CGImage];

    // CIAffineClamp with an identity transform extends the edge pixels
    // outward, so the blur does not fade to transparent at the borders.
    CIFilter *affineClampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
    CGAffineTransform xform = CGAffineTransformMakeScale(1.0, 1.0);
    [affineClampFilter setValue:inputImage forKey:kCIInputImageKey];
    [affineClampFilter setValue:[NSValue valueWithBytes:&xform
                                               objCType:@encode(CGAffineTransform)]
                         forKey:@"inputTransform"];

    CIImage *extendedImage = [affineClampFilter valueForKey:kCIOutputImageKey];

    // Set up the Gaussian blur (Core Image offers many other filters).
    CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blurFilter setValue:extendedImage forKey:kCIInputImageKey];
    [blurFilter setValue:@5.0f forKey:@"inputRadius"];
    CIImage *result = [blurFilter valueForKey:kCIOutputImageKey];

    // CIGaussianBlur has a tendency to shrink the image a little;
    // rendering from the original input extent keeps the output
    // matched exactly to the bounds of the source image.
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];

    UIImage *returnImage = [UIImage imageWithCGImage:cgImage];
    // ARC does not manage CGImageRefs, so release it manually
    // before this method returns.
    CGImageRelease(cgImage);

    return returnImage;
}
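Part of that cost is the fresh CIContext allocated on every call; Apple's guidance is to create Core Image contexts once and reuse them. As a first mitigation (my own addition, not something the original post tried), the context can be cached:

// Sketch (assumption, not from the original post): reuse one CIContext
// instead of creating a new one per blur call.
static CIContext *sharedBlurContext(void)
{
    static CIContext *context = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        context = [CIContext contextWithOptions:nil];
    });
    return context;
}

Even with a cached context, rendering a Core Image blur per cell can still be too slow for smooth scrolling, which is what motivates the Accelerate version below.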
This method produced the effect I wanted, but the stuttering made it unusable. After hunting around online for a better approach, I arrived at the following blur implementation:

(2)

/**
 *  Efficient Gaussian blur: the Accelerate (vImage) approach.
 *  Requires #import <Accelerate/Accelerate.h>.
 *
 *  A box convolution with a large kernel approximates a Gaussian blur,
 *  and vImage runs it with vectorized code, so it is much faster than
 *  the Core Image path above.
 */
- (UIImage *)accelerateBlurWithImage:(UIImage *)image
{
    // Kernel size controls blur strength; it must be odd.
    NSInteger boxSize = (NSInteger)(10 * 5);
    boxSize = boxSize - (boxSize % 2) + 1;

    CGImageRef img = image.CGImage;

    vImage_Buffer inBuffer, outBuffer, rgbOutBuffer;
    vImage_Error error;

    void *pixelBuffer, *convertBuffer;

    // Copy the source pixels. This assumes a 4-bytes-per-pixel
    // (ARGB8888-style) bitmap, which the vImage calls below expect.
    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);

    convertBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
    rgbOutBuffer.width = CGImageGetWidth(img);
    rgbOutBuffer.height = CGImageGetHeight(img);
    rgbOutBuffer.rowBytes = CGImageGetBytesPerRow(img);
    rgbOutBuffer.data = convertBuffer;

    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);

    pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
    if (pixelBuffer == NULL) {
        NSLog(@"No pixelbuffer");
    }

    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(img);
    outBuffer.height = CGImageGetHeight(img);
    outBuffer.rowBytes = CGImageGetBytesPerRow(img);

    // The box convolution: this is the actual blur.
    error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0,
                                       (uint32_t)boxSize, (uint32_t)boxSize,
                                       NULL, kvImageEdgeExtend);
    if (error) {
        NSLog(@"error from convolution %ld", error);
    }

    // Swap the B and R channels so the bitmap matches the
    // kCGImageAlphaNoneSkipLast layout used below.
    const uint8_t mask[] = {2, 1, 0, 3};
    vImagePermuteChannels_ARGB8888(&outBuffer, &rgbOutBuffer, mask, kvImageNoFlags);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(rgbOutBuffer.data,
                                             rgbOutBuffer.width,
                                             rgbOutBuffer.height,
                                             8,
                                             rgbOutBuffer.rowBytes,
                                             colorSpace,
                                             kCGImageAlphaNoneSkipLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *returnImage = [UIImage imageWithCGImage:imageRef];

    // Clean up.
    CGContextRelease(ctx);
    free(pixelBuffer);
    free(convertBuffer);
    CFRelease(inBitmapData);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(imageRef);

    return returnImage;
}
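Finally, even the faster vImage blur is best kept off the main thread while cells are being configured. The sketch below ties the three steps from the start of the post together; MyCell and its blurredBottomView image view are hypothetical names for illustration, not part of the original post:

// Sketch: steps 1-3 combined, off the main thread.
// MyCell / blurredBottomView are assumed names, not from the original.
- (void)applyBlurredBottomToCell:(MyCell *)cell withImage:(UIImage *)original
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *strip   = [self imageFromView:original];         // step 1: crop
        UIImage *blurred = [self accelerateBlurWithImage:strip];  // step 2: blur
        dispatch_async(dispatch_get_main_queue(), ^{
            cell.blurredBottomView.image = blurred;               // step 3: fill the bottom view
        });
    });
}

Caching the blurred strip (for example, keyed by the source image) also avoids redoing the work every time a cell is reused during scrolling.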

Also, if you are wondering how much overhead Gaussian blur puts on the system, see the Zhihu question 《iOS 7 的实时毛玻璃模糊 (live blur) 效果渲染需要多大的系统开销》 (roughly, "How much system overhead does iOS 7's real-time live blur rendering require?"); the answer there by 秦道平, a programmer, explains it in great detail.

