GPUImage Filter Algorithm Explained
Source: Internet · Editor: 程序博客网 · Published: 2024/06/06 02:58
This is a walk-through of the GPUImage source, specifically `GPUImageFilter.m`, the base class every filter builds on.

```objc
#import "GPUImageFilter.h"
#import "GPUImagePicture.h"
#import <AVFoundation/AVFoundation.h>
```

The vertex shader

The vertex shader is a programmable processing unit that performs per-vertex work — vertex transformation, texture-coordinate transformation, lighting, materials and so on — and runs once per vertex. It replaces the vertex-transform, lighting and texture-coordinate stages of the traditional fixed-function pipeline; developers implement these themselves, which greatly increases flexibility. A vertex shader takes attribute variables, uniform variables, samplers and temporaries as input, and produces varying variables as output.

GLSL vector types:

- vec2: a vector of 2 floats
- vec3: a vector of 3 floats
- vec4: a vector of 4 floats

Variable qualifiers:

1. attribute variables can only be used in the vertex shader, never in the fragment shader. They carry per-vertex data such as vertex positions, texture coordinates and colors.
2. uniform variables pass values from the application into the vertex or fragment shader. Inside the shader they behave like a C `const`: the shader program cannot modify them. They typically hold transform matrices, lighting parameters and texture samplers.
3. varying variables carry data from the vertex shader to the fragment shader. Any value that needs interpolation — colors, normals, texture coordinates — is passed this way. The vertex shader writes the varying and the fragment shader reads it; the declaration must be identical in both stages. In the code below, `textureCoordinate` is such a variable.
4. gl_Position is a built-in variable holding the transformed vertex position. The vertex shader receives the raw vertex data from the application, applies translation, rotation, scaling and other transforms, and writes the new position to gl_Position, which is handed to the later pipeline stages.

```objc
// Hardcode the vertex shader for standard filters, but this can be overridden
NSString *const kGPUImageVertexShaderString = SHADER_STRING
(
 attribute vec4 position;               // vertex position supplied by the application
 attribute vec4 inputTextureCoordinate; // vertex texture coordinate supplied by the application

 varying vec2 textureCoordinate;        // texture coordinate passed on to the fragment shader

 void main()
 {
     gl_Position = position;
     textureCoordinate = inputTextureCoordinate.xy;
 }
);
```

The fragment shader

The fragment shader is a programmable unit that processes each fragment and its associated data; it can perform texture access, color combination, fog and similar operations, and runs once per fragment. It replaces the fixed-function texturing, color-sum, fog and alpha-test stages, and this part is up to the developer to implement.

In a fragment shader, `varying` marks an input interpolated from the vertex shader's output; a fragment shader cannot receive application attributes directly — it only gets what the vertex shader passes along. `gl_FragColor` is the built-in variable that receives the fragment color computed by the shader, which is then sent to the remaining pipeline stages.

The passthrough fragment shader below takes the interpolated texture coordinate from the varying, calls the built-in `texture2D` function to sample the bound texture (one texture — effectively one image) at that coordinate, and writes the sampled color to `gl_FragColor`, completing the shading of the fragment.

```objc
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kGPUImagePassthroughFragmentShaderString = SHADER_STRING
(
 varying highp vec2 textureCoordinate; // texture coordinate received from the vertex shader
 uniform sampler2D inputImageTexture;  // texture sampler, representing one texture (its texels)

 void main()
 {
     // The fragment color is computed by texture2D(): it is sampled from the
     // texture at the interpolated texture coordinate.
     gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
 }
);
#else
NSString *const kGPUImagePassthroughFragmentShaderString = SHADER_STRING
(
 varying vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;

 void main()
 {
     gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
 }
);
#endif
```

The filter implementation

```objc
@implementation GPUImageFilter

@synthesize preventRendering = _preventRendering;
@synthesize currentlyReceivingMonochromeInput;

#pragma mark -
#pragma mark Initialization and teardown

- (id)initWithVertexShaderFromString:(NSString *)vertexShaderString fragmentShaderFromString:(NSString *)fragmentShaderString;
{
    if (!(self = [super init])) {
        return nil;
    }

    uniformStateRestorationBlocks = [NSMutableDictionary dictionaryWithCapacity:10];
    _preventRendering = NO;
    currentlyReceivingMonochromeInput = NO;
    inputRotation = kGPUImageNoRotation;
    backgroundColorRed = 0.0;
    backgroundColorGreen = 0.0;
    backgroundColorBlue = 0.0;
    backgroundColorAlpha = 0.0;
    imageCaptureSemaphore = dispatch_semaphore_create(0);
    dispatch_semaphore_signal(imageCaptureSemaphore);

    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];

        filterProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:vertexShaderString fragmentShaderString:fragmentShaderString];

        if (!filterProgram.initialized) {
            [self initializeAttributes];

            if (![filterProgram link]) {
                NSString *progLog = [filterProgram programLog];
                NSLog(@"Program link log: %@", progLog);
                NSString *fragLog = [filterProgram fragmentShaderLog];
                NSLog(@"Fragment shader compile log: %@", fragLog);
                NSString *vertLog = [filterProgram vertexShaderLog];
                NSLog(@"Vertex shader compile log: %@", vertLog);
                filterProgram = nil;
                NSAssert(NO, @"Filter shader link failed");
            }
        }

        filterPositionAttribute = [filterProgram attributeIndex:@"position"];
        filterTextureCoordinateAttribute = [filterProgram attributeIndex:@"inputTextureCoordinate"];
        filterInputTextureUniform = [filterProgram uniformIndex:@"inputImageTexture"]; // This does assume a name of "inputImageTexture" for the fragment shader

        [GPUImageContext setActiveShaderProgram:filterProgram];

        glEnableVertexAttribArray(filterPositionAttribute);
        glEnableVertexAttribArray(filterTextureCoordinateAttribute);
    });

    return self;
}

- (id)initWithFragmentShaderFromString:(NSString *)fragmentShaderString;
{
    if (!(self = [self initWithVertexShaderFromString:kGPUImageVertexShaderString fragmentShaderFromString:fragmentShaderString])) {
        return nil;
    }
    return self;
}

- (id)initWithFragmentShaderFromFile:(NSString *)fragmentShaderFilename;
{
    NSString *fragmentShaderPathname = [[NSBundle mainBundle] pathForResource:fragmentShaderFilename ofType:@"fsh"];
    NSString *fragmentShaderString = [NSString stringWithContentsOfFile:fragmentShaderPathname encoding:NSUTF8StringEncoding error:nil];

    if (!(self = [self initWithFragmentShaderFromString:fragmentShaderString])) {
        return nil;
    }
    return self;
}

- (id)init;
{
    if (!(self = [self initWithFragmentShaderFromString:kGPUImagePassthroughFragmentShaderString])) {
        return nil;
    }
    return self;
}

- (void)initializeAttributes;
{
    [filterProgram addAttribute:@"position"];
    [filterProgram addAttribute:@"inputTextureCoordinate"];

    // Override this, calling back to this super method, in order to add new attributes to your vertex shader
}

- (void)setupFilterForSize:(CGSize)filterFrameSize;
{
    // This is where you can override to provide some custom setup, if your filter has a size-dependent element
}

- (void)dealloc
{
#if !OS_OBJECT_USE_OBJC
    if (imageCaptureSemaphore != NULL) {
        dispatch_release(imageCaptureSemaphore);
    }
#endif
}

#pragma mark -
#pragma mark Still image processing

- (void)useNextFrameForImageCapture;
{
    usingNextFrameForImageCapture = YES;

    // Set the semaphore high, if it isn't already
    if (dispatch_semaphore_wait(imageCaptureSemaphore, DISPATCH_TIME_NOW) != 0) {
        return;
    }
}

- (CGImageRef)newCGImageFromCurrentlyProcessedOutput
{
    // Give it three seconds to process, then abort if they forgot to set up the image capture properly
    double timeoutForImageCapture = 3.0;
    dispatch_time_t convertedTimeout = dispatch_time(DISPATCH_TIME_NOW, timeoutForImageCapture * NSEC_PER_SEC);

    if (dispatch_semaphore_wait(imageCaptureSemaphore, convertedTimeout) != 0) {
        return NULL;
    }

    GPUImageFramebuffer *framebuffer = [self framebufferForOutput];

    usingNextFrameForImageCapture = NO;
    dispatch_semaphore_signal(imageCaptureSemaphore);

    CGImageRef image = [framebuffer newCGImageFromFramebufferContents];
    return image;
}

#pragma mark -
#pragma mark Managing the display FBOs

- (CGSize)sizeOfFBO;
{
    CGSize outputSize = [self maximumOutputSize];
    if ( (CGSizeEqualToSize(outputSize, CGSizeZero)) || (inputTextureSize.width < outputSize.width) ) {
        return inputTextureSize;
    } else {
        return outputSize;
    }
}

#pragma mark -
#pragma mark Rendering

// Texture coordinates for the quad's vertices; a different set is used for each orientation
+ (const GLfloat *)textureCoordinatesForRotation:(GPUImageRotationMode)rotationMode;
{
    static const GLfloat noRotationTextureCoordinates[] = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
    };

    static const GLfloat rotateLeftTextureCoordinates[] = {
        1.0f, 0.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        0.0f, 1.0f,
    };

    static const GLfloat rotateRightTextureCoordinates[] = {
        0.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
    };

    static const GLfloat verticalFlipTextureCoordinates[] = {
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f,
    };

    static const GLfloat horizontalFlipTextureCoordinates[] = {
        1.0f, 0.0f,
        0.0f, 0.0f,
        1.0f, 1.0f,
        0.0f, 1.0f,
    };

    static const GLfloat rotateRightVerticalFlipTextureCoordinates[] = {
        0.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        1.0f, 1.0f,
    };

    static const GLfloat rotateRightHorizontalFlipTextureCoordinates[] = {
        1.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        0.0f, 0.0f,
    };

    static const GLfloat rotate180TextureCoordinates[] = {
        1.0f, 1.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 0.0f,
    };

    switch(rotationMode) {
        case kGPUImageNoRotation: return noRotationTextureCoordinates;
        case kGPUImageRotateLeft: return rotateLeftTextureCoordinates;
        case kGPUImageRotateRight: return rotateRightTextureCoordinates;
        case kGPUImageFlipVertical: return verticalFlipTextureCoordinates;
        case kGPUImageFlipHorizonal: return horizontalFlipTextureCoordinates;
        case kGPUImageRotateRightFlipVertical: return rotateRightVerticalFlipTextureCoordinates;
        case kGPUImageRotateRightFlipHorizontal: return rotateRightHorizontalFlipTextureCoordinates;
        case kGPUImageRotate180: return rotate180TextureCoordinates;
    }
}

- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
    if (self.preventRendering) {
        [firstInputFramebuffer unlock];
        return;
    }

    [GPUImageContext setActiveShaderProgram:filterProgram];

    outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:[self sizeOfFBO] textureOptions:self.outputTextureOptions onlyTexture:NO];
    [outputFramebuffer activateFramebuffer];
    if (usingNextFrameForImageCapture) {
        [outputFramebuffer lock];
    }

    [self setUniformsForProgramAtIndex:0];

    glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
    glClear(GL_COLOR_BUFFER_BIT);

    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, [firstInputFramebuffer texture]);

    glUniform1i(filterInputTextureUniform, 2);

    glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
    glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    [firstInputFramebuffer unlock];

    if (usingNextFrameForImageCapture) {
        dispatch_semaphore_signal(imageCaptureSemaphore);
    }
}

- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
{
    if (self.frameProcessingCompletionBlock != NULL) {
        self.frameProcessingCompletionBlock(self, frameTime);
    }

    // Get all targets the framebuffer so they can grab a lock on it
    for (id<GPUImageInput> currentTarget in targets) {
        if (currentTarget != self.targetToIgnoreForUpdates) {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

            [self setInputFramebufferForTarget:currentTarget atIndex:textureIndex];
            [currentTarget setInputSize:[self outputFrameSize] atIndex:textureIndex];
        }
    }

    // Release our hold so it can return to the cache immediately upon processing
    [[self framebufferForOutput] unlock];

    if (usingNextFrameForImageCapture) {
//        usingNextFrameForImageCapture = NO;
    } else {
        [self removeOutputFramebuffer];
    }

    // Trigger processing last, so that our unlock comes first in serial execution, avoiding the need for a callback
    for (id<GPUImageInput> currentTarget in targets) {
        if (currentTarget != self.targetToIgnoreForUpdates) {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
            [currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndex];
        }
    }
}

- (CGSize)outputFrameSize;
{
    return inputTextureSize;
}

#pragma mark -
#pragma mark Input parameters

- (void)setBackgroundColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent alpha:(GLfloat)alphaComponent;
{
    backgroundColorRed = redComponent;
    backgroundColorGreen = greenComponent;
    backgroundColorBlue = blueComponent;
    backgroundColorAlpha = alphaComponent;
}

- (void)setInteger:(GLint)newInteger forUniformName:(NSString *)uniformName;
{
    GLint uniformIndex = [filterProgram uniformIndex:uniformName];
    [self setInteger:newInteger forUniform:uniformIndex program:filterProgram];
}

- (void)setFloat:(GLfloat)newFloat forUniformName:(NSString *)uniformName;
{
    GLint uniformIndex = [filterProgram uniformIndex:uniformName];
    [self setFloat:newFloat forUniform:uniformIndex program:filterProgram];
}

- (void)setSize:(CGSize)newSize forUniformName:(NSString *)uniformName;
{
    GLint uniformIndex = [filterProgram uniformIndex:uniformName];
    [self setSize:newSize forUniform:uniformIndex program:filterProgram];
}

- (void)setPoint:(CGPoint)newPoint forUniformName:(NSString *)uniformName;
{
    GLint uniformIndex = [filterProgram uniformIndex:uniformName];
    [self setPoint:newPoint forUniform:uniformIndex program:filterProgram];
}

- (void)setFloatVec3:(GPUVector3)newVec3 forUniformName:(NSString *)uniformName;
{
    GLint uniformIndex = [filterProgram uniformIndex:uniformName];
    [self setVec3:newVec3 forUniform:uniformIndex program:filterProgram];
}

- (void)setFloatVec4:(GPUVector4)newVec4 forUniform:(NSString *)uniformName;
{
    GLint uniformIndex = [filterProgram uniformIndex:uniformName];
    [self setVec4:newVec4 forUniform:uniformIndex program:filterProgram];
}

- (void)setFloatArray:(GLfloat *)array length:(GLsizei)count forUniform:(NSString *)uniformName
{
    GLint uniformIndex = [filterProgram uniformIndex:uniformName];
    [self setFloatArray:array length:count forUniform:uniformIndex program:filterProgram];
}

- (void)setMatrix3f:(GPUMatrix3x3)matrix forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
    runAsynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext setActiveShaderProgram:shaderProgram];
        [self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
            glUniformMatrix3fv(uniform, 1, GL_FALSE, (GLfloat *)&matrix);
        }];
    });
}

- (void)setMatrix4f:(GPUMatrix4x4)matrix forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
    runAsynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext setActiveShaderProgram:shaderProgram];
        [self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
            glUniformMatrix4fv(uniform, 1, GL_FALSE, (GLfloat *)&matrix);
        }];
    });
}

- (void)setFloat:(GLfloat)floatValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
    runAsynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext setActiveShaderProgram:shaderProgram];
        [self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
            glUniform1f(uniform, floatValue);
        }];
    });
}

- (void)setPoint:(CGPoint)pointValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
    runAsynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext setActiveShaderProgram:shaderProgram];
        [self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
            GLfloat positionArray[2];
            positionArray[0] = pointValue.x;
            positionArray[1] = pointValue.y;
            glUniform2fv(uniform, 1, positionArray);
        }];
    });
}

- (void)setSize:(CGSize)sizeValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
    runAsynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext setActiveShaderProgram:shaderProgram];
        [self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
            GLfloat sizeArray[2];
            sizeArray[0] = sizeValue.width;
            sizeArray[1] = sizeValue.height;
            glUniform2fv(uniform, 1, sizeArray);
        }];
    });
}

- (void)setVec3:(GPUVector3)vectorValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
    runAsynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext setActiveShaderProgram:shaderProgram];
        [self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
            glUniform3fv(uniform, 1, (GLfloat *)&vectorValue);
        }];
    });
}

- (void)setVec4:(GPUVector4)vectorValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
    runAsynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext setActiveShaderProgram:shaderProgram];
        [self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
            glUniform4fv(uniform, 1, (GLfloat *)&vectorValue);
        }];
    });
}

- (void)setFloatArray:(GLfloat *)arrayValue length:(GLsizei)arrayLength forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
    // Make a copy of the data, so it doesn't get overwritten before async call executes
    NSData *arrayData = [NSData dataWithBytes:arrayValue length:arrayLength * sizeof(arrayValue[0])];

    runAsynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext setActiveShaderProgram:shaderProgram];
        [self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
            glUniform1fv(uniform, arrayLength, [arrayData bytes]);
        }];
    });
}

- (void)setInteger:(GLint)intValue forUniform:(GLint)uniform program:(GLProgram *)shaderProgram;
{
    runAsynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext setActiveShaderProgram:shaderProgram];
        [self setAndExecuteUniformStateCallbackAtIndex:uniform forProgram:shaderProgram toBlock:^{
            glUniform1i(uniform, intValue);
        }];
    });
}

- (void)setAndExecuteUniformStateCallbackAtIndex:(GLint)uniform forProgram:(GLProgram *)shaderProgram toBlock:(dispatch_block_t)uniformStateBlock;
{
    [uniformStateRestorationBlocks setObject:[uniformStateBlock copy] forKey:[NSNumber numberWithInt:uniform]];
    uniformStateBlock();
}

- (void)setUniformsForProgramAtIndex:(NSUInteger)programIndex;
{
    [uniformStateRestorationBlocks enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop){
        dispatch_block_t currentBlock = obj;
        currentBlock();
    }];
}

#pragma mark -
#pragma mark GPUImageInput

- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
    static const GLfloat imageVertices[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
    };

    [self renderToTextureWithVertices:imageVertices textureCoordinates:[[self class] textureCoordinatesForRotation:inputRotation]];

    [self informTargetsAboutNewFrameAtTime:frameTime];
}

- (NSInteger)nextAvailableTextureIndex;
{
    return 0;
}

- (void)setInputFramebuffer:(GPUImageFramebuffer *)newInputFramebuffer atIndex:(NSInteger)textureIndex;
{
    firstInputFramebuffer = newInputFramebuffer;
    [firstInputFramebuffer lock];
}

- (CGSize)rotatedSize:(CGSize)sizeToRotate forIndex:(NSInteger)textureIndex;
{
    CGSize rotatedSize = sizeToRotate;

    if (GPUImageRotationSwapsWidthAndHeight(inputRotation)) {
        rotatedSize.width = sizeToRotate.height;
        rotatedSize.height = sizeToRotate.width;
    }

    return rotatedSize;
}

- (CGPoint)rotatedPoint:(CGPoint)pointToRotate forRotation:(GPUImageRotationMode)rotation;
{
    CGPoint rotatedPoint;
    switch(rotation) {
        case kGPUImageNoRotation: return pointToRotate; break;
        case kGPUImageFlipHorizonal: {
            rotatedPoint.x = 1.0 - pointToRotate.x;
            rotatedPoint.y = pointToRotate.y;
        }; break;
        case kGPUImageFlipVertical: {
            rotatedPoint.x = pointToRotate.x;
            rotatedPoint.y = 1.0 - pointToRotate.y;
        }; break;
        case kGPUImageRotateLeft: {
            rotatedPoint.x = 1.0 - pointToRotate.y;
            rotatedPoint.y = pointToRotate.x;
        }; break;
        case kGPUImageRotateRight: {
            rotatedPoint.x = pointToRotate.y;
            rotatedPoint.y = 1.0 - pointToRotate.x;
        }; break;
        case kGPUImageRotateRightFlipVertical: {
            rotatedPoint.x = pointToRotate.y;
            rotatedPoint.y = pointToRotate.x;
        }; break;
        case kGPUImageRotateRightFlipHorizontal: {
            rotatedPoint.x = 1.0 - pointToRotate.y;
            rotatedPoint.y = 1.0 - pointToRotate.x;
        }; break;
        case kGPUImageRotate180: {
            rotatedPoint.x = 1.0 - pointToRotate.x;
            rotatedPoint.y = 1.0 - pointToRotate.y;
        }; break;
    }

    return rotatedPoint;
}

- (void)setInputSize:(CGSize)newSize atIndex:(NSInteger)textureIndex;
{
    if (self.preventRendering) {
        return;
    }

    if (overrideInputSize) {
        if (CGSizeEqualToSize(forcedMaximumSize, CGSizeZero)) {
        } else {
            CGRect insetRect = AVMakeRectWithAspectRatioInsideRect(newSize, CGRectMake(0.0, 0.0, forcedMaximumSize.width, forcedMaximumSize.height));
            inputTextureSize = insetRect.size;
        }
    } else {
        CGSize rotatedSize = [self rotatedSize:newSize forIndex:textureIndex];

        if (CGSizeEqualToSize(rotatedSize, CGSizeZero)) {
            inputTextureSize = rotatedSize;
        } else if (!CGSizeEqualToSize(inputTextureSize, rotatedSize)) {
            inputTextureSize = rotatedSize;
        }
    }

    [self setupFilterForSize:[self sizeOfFBO]];
}

- (void)setInputRotation:(GPUImageRotationMode)newInputRotation atIndex:(NSInteger)textureIndex;
{
    inputRotation = newInputRotation;
}

- (void)forceProcessingAtSize:(CGSize)frameSize;
{
    if (CGSizeEqualToSize(frameSize, CGSizeZero)) {
        overrideInputSize = NO;
    } else {
        overrideInputSize = YES;
        inputTextureSize = frameSize;
        forcedMaximumSize = CGSizeZero;
    }
}

- (void)forceProcessingAtSizeRespectingAspectRatio:(CGSize)frameSize;
{
    if (CGSizeEqualToSize(frameSize, CGSizeZero)) {
        overrideInputSize = NO;
        inputTextureSize = CGSizeZero;
        forcedMaximumSize = CGSizeZero;
    } else {
        overrideInputSize = YES;
        forcedMaximumSize = frameSize;
    }
}

- (CGSize)maximumOutputSize;
{
    // I'm temporarily disabling adjustments for smaller output sizes until I figure out how to make this work better
    return CGSizeZero;

    /*
    if (CGSizeEqualToSize(cachedMaximumOutputSize, CGSizeZero)) {
        for (id<GPUImageInput> currentTarget in targets) {
            if ([currentTarget maximumOutputSize].width > cachedMaximumOutputSize.width) {
                cachedMaximumOutputSize = [currentTarget maximumOutputSize];
            }
        }
    }

    return cachedMaximumOutputSize;
    */
}

- (void)endProcessing
{
    if (!isEndProcessing) {
        isEndProcessing = YES;

        for (id<GPUImageInput> currentTarget in targets) {
            [currentTarget endProcessing];
        }
    }
}

- (BOOL)wantsMonochromeInput;
{
    return NO;
}

#pragma mark -
#pragma mark Accessors

@end
```
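The per-orientation coordinate remapping that `-rotatedPoint:forRotation:` performs can be sketched in plain C. This is not GPUImage code — the enum and struct names below are illustrative — but the arithmetic per case mirrors the switch statement above, operating on points in normalized [0,1] texture space:

```c
#include <assert.h>
#include <math.h>

/* Rotation modes mirroring GPUImageRotationMode (names here are illustrative). */
typedef enum {
    RotNone, RotLeft, RotRight, RotFlipVertical, RotFlipHorizontal,
    RotRightFlipVertical, RotRightFlipHorizontal, Rot180
} RotationMode;

typedef struct { float x, y; } Point2;

/* Remap a point in normalized [0,1] texture space, one case per orientation,
   following the same arithmetic as -rotatedPoint:forRotation:. */
static Point2 rotated_point(Point2 p, RotationMode r) {
    Point2 q = p;
    switch (r) {
        case RotNone:                                                   break;
        case RotFlipHorizontal:      q.x = 1.0f - p.x;                  break;
        case RotFlipVertical:        q.y = 1.0f - p.y;                  break;
        case RotLeft:                q.x = 1.0f - p.y; q.y = p.x;       break;
        case RotRight:               q.x = p.y; q.y = 1.0f - p.x;       break;
        case RotRightFlipVertical:   q.x = p.y; q.y = p.x;              break;
        case RotRightFlipHorizontal: q.x = 1.0f - p.y; q.y = 1.0f - p.x; break;
        case Rot180:                 q.x = 1.0f - p.x; q.y = 1.0f - p.y; break;
    }
    return q;
}
```

One useful sanity check on these formulas: a 180° rotation is the composition of a horizontal and a vertical flip, and applying `RotFlipHorizontal` followed by `RotFlipVertical` does land on the same point as `Rot180`.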
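To make the passthrough fragment shader's behavior concrete, here is a CPU-side sketch of what sampling a texture at a normalized coordinate means. GLSL's `texture2D` applies whatever filtering the texture is configured with; the function below (a hypothetical helper, not part of GPUImage or OpenGL) models the simplest case, nearest-neighbour lookup on a single-channel w×h texel grid:

```c
#include <assert.h>

/* Nearest-neighbour lookup standing in for GLSL's texture2D():
   normalized (s, t) in [0,1] is mapped onto a w-by-h texel grid,
   clamped at the edges. */
static unsigned char sample_nearest(const unsigned char *tex, int w, int h,
                                    float s, float t) {
    int x = (int)(s * w);
    int y = (int)(t * h);
    if (x < 0) x = 0; if (x >= w) x = w - 1;   /* clamp to edge */
    if (y < 0) y = 0; if (y >= h) y = h - 1;
    return tex[y * w + x];
}
```

A passthrough filter then amounts to evaluating this lookup once per output fragment at that fragment's own interpolated coordinate, which reproduces the input image unchanged.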
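Finally, the size computation behind `-forceProcessingAtSizeRespectingAspectRatio:` relies on `AVMakeRectWithAspectRatioInsideRect` inside `-setInputSize:atIndex:`. The aspect-fit math it performs can be sketched as follows (illustrative names, not the AVFoundation implementation): scale the content by the smaller of the two axis ratios so it fits entirely inside the bounds without distortion.

```c
#include <assert.h>
#include <math.h>

typedef struct { float width, height; } Size2;

/* Aspect-fit: scale `content` to the largest size that fits inside `bounds`
   while preserving its aspect ratio -- the role AVMakeRectWithAspectRatioInsideRect
   plays when a forced maximum size is set on the filter. */
static Size2 aspect_fit(Size2 content, Size2 bounds) {
    float scale_w = bounds.width / content.width;
    float scale_h = bounds.height / content.height;
    float scale = (scale_w < scale_h) ? scale_w : scale_h;  /* limiting axis wins */
    Size2 out = { content.width * scale, content.height * scale };
    return out;
}
```

For example, fitting a 1920×1080 frame into a 1280×1280 maximum yields 1280×720: the width is the limiting axis, and the height shrinks by the same factor so the 16:9 ratio survives.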