iOS GPUImage Image-Processing Performance Comparison


Demo: http://download.csdn.net/detail/xoxo_x/9704542

Input image (12 KB): [image]

Filtered result: [image]

  • CPU filter code (see below)

  • Core Image filter code (see below)

  • GPUImage filter code (see below)

The CPU, Core Image, and GPUImage paths each apply the same sepia-style filter to the same photo.
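All three paths are timed the same way: wall-clock time from CFAbsoluteTimeGetCurrent(), converted to milliseconds. As a minimal sketch of that pattern (BenchmarkBlockMs is a hypothetical helper name; the original methods inline this logic rather than call a helper):

    #import <CoreFoundation/CoreFoundation.h>

    // Hypothetical helper illustrating the timing pattern used by all three
    // benchmark methods below; they inline it instead of calling a helper.
    static inline double BenchmarkBlockMs(void (^block)(void))
    {
        CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();
        block();
        return (CFAbsoluteTimeGetCurrent() - startTime) * 1000.0;
    }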

Processing times:

First run: [screenshot]

Second run: [screenshot]

From these screenshots, the results so far give the following ranking.

Processing speed (performance): CPU >= GPUImage > Core Image

Of course, that is the result for a small image (12 KB in size).

Next, let's try a larger image.

Input image (970 KB): [image]

Filtered result: [image]

Times:

[screenshot]

[screenshot]

The conclusion is the same. Processing speed (performance): CPU >= GPUImage > Core Image

While the CPU is busy with application logic, you cannot also ask it to do image math without dragging down overall performance. Handing the graphics work to the GPU lets the two run in parallel: in these tests the raw processing times are roughly on par, but the GPU exists specifically for image processing. GPUImage is built on OpenGL ES, and OpenGL ES runs on the GPU, so GPUImage is the natural choice whenever the CPU needs to stay free for other work.
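Where the GPU clearly pays off is sustained work such as live video, where the OpenGL ES pipeline is built once and then reused for every frame. A minimal sketch of such a setup, assuming it runs inside a view controller (this camera pipeline is not part of the original benchmark):

    #import "GPUImage.h"

    // Build the filter chain once; every camera frame then flows through it on the GPU.
    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                            cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

    GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
    GPUImageView *filteredPreview = [[GPUImageView alloc] initWithFrame:self.view.bounds]; // assumes a view controller

    [videoCamera addTarget:sepiaFilter];
    [sepiaFilter addTarget:filteredPreview];
    [self.view addSubview:filteredPreview];

    [videoCamera startCameraCapture];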


Benchmark and file-writing code:

    UIImage *inputImage = [UIImage imageNamed:@"Lambeau.jpg"];

    UIImage *imageFilteredUsingCPURoutine = [self imageProcessedOnCPU:inputImage];
    [self writeImage:imageFilteredUsingCPURoutine toFile:@"Lambeau-CPUFiltered.png"];

    // Create the Core Image context outside the benchmarked region,
    // because it only needs to be created once and can then be reused.
    if (coreImageContext == nil)
    {
        coreImageContext = [CIContext contextWithOptions:nil];
    }

    UIImage *imageFilteredUsingCoreImageRoutine = [self imageProcessedUsingCoreImage:inputImage];
    [self writeImage:imageFilteredUsingCoreImageRoutine toFile:@"Lambeau-CoreImageFiltered.png"];

    UIImage *imageFilteredUsingGPUImageRoutine = [self imageProcessedUsingGPUImage:inputImage];
    [self writeImage:imageFilteredUsingGPUImageRoutine toFile:@"Lambeau-GPUImageFiltered.png"];

    [self.tableView reloadData];

    - (void)writeImage:(UIImage *)imageToWrite toFile:(NSString *)fileName;
    {
        if (imageToWrite == nil)
        {
            return;
        }

        NSData *dataForPNGFile = UIImagePNGRepresentation(imageToWrite);

        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSLog(@"documentsDirectory: %@", documentsDirectory);

        NSError *error = nil;
        if (![dataForPNGFile writeToFile:[documentsDirectory stringByAppendingPathComponent:fileName] options:NSAtomicWrite error:&error])
        {
            return;
        }
    }
CPU processing code:
    - (UIImage *)imageProcessedOnCPU:(UIImage *)imageToProcess;
    {
        // Drawn from Rahul Vyas' answer on Stack Overflow at http://stackoverflow.com/a/4211729/19679
        CFAbsoluteTime elapsedTime, startTime = CFAbsoluteTimeGetCurrent();

        CGImageRef cgImage = [imageToProcess CGImage];
        CGImageRetain(cgImage);
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        UInt8 *data = (UInt8 *)CFDataGetBytePtr(bitmapData);
        CGImageRelease(cgImage);

        int width = imageToProcess.size.width;
        int height = imageToProcess.size.height;
        NSInteger myDataLength = width * height * 4;

        // Apply the sepia weighting matrix to each RGBA pixel, clamping to 255.
        for (int i = 0; i < myDataLength; i += 4)
        {
            UInt8 r_pixel = data[i];
            UInt8 g_pixel = data[i + 1];
            UInt8 b_pixel = data[i + 2];

            int outputRed = (r_pixel * .393) + (g_pixel * .769) + (b_pixel * .189);
            int outputGreen = (r_pixel * .349) + (g_pixel * .686) + (b_pixel * .168);
            int outputBlue = (r_pixel * .272) + (g_pixel * .534) + (b_pixel * .131);

            if (outputRed > 255) outputRed = 255;
            if (outputGreen > 255) outputGreen = 255;
            if (outputBlue > 255) outputBlue = 255;

            data[i] = outputRed;
            data[i + 1] = outputGreen;
            data[i + 2] = outputBlue;
        }

        // Rewrap the modified bytes in a new CGImage.
        CGDataProviderRef provider2 = CGDataProviderCreateWithData(NULL, data, myDataLength, NULL);
        int bitsPerComponent = 8;
        int bitsPerPixel = 32;
        int bytesPerRow = 4 * width;
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
        CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
        CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider2, NULL, NO, renderingIntent);

        CGColorSpaceRelease(colorSpaceRef);
        CGDataProviderRelease(provider2);
        CFRelease(bitmapData);

        UIImage *sepiaImage = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);

        elapsedTime = CFAbsoluteTimeGetCurrent() - startTime;
        processingTimeForCPURoutine = elapsedTime * 1000.0;

        return sepiaImage;
    }
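A quick sanity check on the clamping branch: for a pure white input pixel (r = g = b = 255), the red channel works out to 255 × (0.393 + 0.769 + 0.189) ≈ 344, which the if (outputRed > 255) guard caps back to 255. Without the clamp, bright highlights would overflow the 8-bit channel and wrap around.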
Core Image processing code:
    - (UIImage *)imageProcessedUsingCoreImage:(UIImage *)imageToProcess;
    {
        /*
        NSArray *filterNames = [CIFilter filterNamesInCategory:kCICategoryBuiltIn];
        NSLog(@"Built in filters");
        for (NSString *currentFilterName in filterNames)
        {
            NSLog(@"%@", currentFilterName);
        }
        */

        CFAbsoluteTime elapsedTime, startTime = CFAbsoluteTimeGetCurrent();

        CIImage *inputImage = [[CIImage alloc] initWithCGImage:imageToProcess.CGImage];

        CIFilter *sepiaTone = [CIFilter filterWithName:@"CISepiaTone"
                                         keysAndValues: kCIInputImageKey, inputImage,
                                @"inputIntensity", [NSNumber numberWithFloat:1.0], nil];
        CIImage *result = [sepiaTone outputImage];

    //    UIImage *resultImage = [UIImage imageWithCIImage:result]; // This gives a nil image, because it doesn't render, unless I'm doing something wrong
        CGImageRef resultRef = [coreImageContext createCGImage:result fromRect:CGRectMake(0, 0, imageToProcess.size.width, imageToProcess.size.height)];
        UIImage *resultImage = [UIImage imageWithCGImage:resultRef];
        CGImageRelease(resultRef);

        elapsedTime = CFAbsoluteTimeGetCurrent() - startTime;
        processingTimeForCoreImageRoutine = elapsedTime * 1000.0;

        return resultImage;
    }
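One caveat on the Core Image timing: contextWithOptions:nil creates a default context, and createCGImage:fromRect: then reads the rendered result back to the CPU. On iOS 5 and later, a GPU-backed context can be created instead; a sketch of that alternative setup (not what this benchmark used):

    #import <CoreImage/CoreImage.h>
    #import <OpenGLES/EAGL.h>

    // Alternative, untested here: back the Core Image context with OpenGL ES.
    EAGLContext *glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    CIContext *gpuBackedContext = [CIContext contextWithEAGLContext:glContext];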
GPUImage processing code:
    - (UIImage *)imageProcessedUsingGPUImage:(UIImage *)imageToProcess;
    {
        CFAbsoluteTime elapsedTime, startTime = CFAbsoluteTimeGetCurrent();

        GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:imageToProcess];
        GPUImageSepiaFilter *stillImageFilter = [[GPUImageSepiaFilter alloc] init];
        [stillImageSource addTarget:stillImageFilter];

        // Must be called before -processImage so the framebuffer is kept around for capture.
        [stillImageFilter useNextFrameForImageCapture];
        [stillImageSource processImage];

        UIImage *currentFilteredVideoFrame = [stillImageFilter imageFromCurrentFramebuffer];

        elapsedTime = CFAbsoluteTimeGetCurrent() - startTime;
        processingTimeForGPUImageRoutine = elapsedTime * 1000.0;

        return currentFilteredVideoFrame;
    }
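For one-off stills, GPUImage also ships a convenience method on GPUImageOutput that performs the same picture → filter → capture steps internally:

    GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];
    UIImage *filteredImage = [sepiaFilter imageByFilteringImage:inputImage];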

Required header imports:

#import "ImageFilteringBenchmarkController.h"#import "GPUImage.h"

Required system frameworks (for GPUImage these are typically CoreMedia, CoreVideo, OpenGLES, AVFoundation, and QuartzCore; the Core Image path additionally needs CoreImage): [screenshot of linked frameworks]
