cocos2dx 2.0 font outline (shader-based)


Saw this on a forum; I haven't tried it myself yet, just saving it here for reference.

http://stackoverflow.com/questions/12469990/simple-glsl-convolution-shader-is-atrociously-slow

I've done this exact thing myself, and I see several things that could be optimized here.

First off, I'd remove the enableTexture conditional and instead split your shader into two programs, one for when it is true and one for when it is false. Conditionals are very expensive in iOS fragment shaders, particularly ones that contain texture reads.
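
As a rough sketch of that split (hedged: the untextured variant and the flatColor uniform are placeholder names I've introduced for illustration, not code from the question or answer), the branching fragment shader becomes two separately compiled programs, selected on the CPU:

    // Hypothetical illustration only: build these as two separate programs
    // and choose between them on the CPU, instead of testing a uniform per fragment.

    // Program 1: textured path.
    varying mediump vec2 textureCoordinate;
    uniform sampler2D inputImageTexture;
    void main()
    {
        gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
    }

    // Program 2: untextured path (shown as comments so this listing stays one shader):
    // uniform lowp vec4 flatColor;
    // void main()
    // {
    //     gl_FragColor = flatColor;
    // }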

Second, you have nine dependent texture reads here. These are texture reads where the texture coordinates are calculated within the fragment shader. Dependent texture reads are very expensive on the PowerVR GPUs within iOS devices, because they prevent that hardware from optimizing texture reads using caching, etc. Because you are sampling from a fixed offset for the 8 surrounding pixels and one central one, these calculations should be moved up into the vertex shader. This also means that these calculations won't have to be performed for each pixel, just once for each vertex and then hardware interpolation will handle the rest.

Third, for() loops haven't been handled all that well by the iOS shader compiler to date, so I tend to avoid those where I can.

As I mentioned, I've done convolution shaders like this in my open source iOS GPUImage framework. For a generic convolution filter, I use the following vertex shader:

    attribute vec4 position;
    attribute vec4 inputTextureCoordinate;

    uniform highp float texelWidth;
    uniform highp float texelHeight;

    varying vec2 textureCoordinate;
    varying vec2 leftTextureCoordinate;
    varying vec2 rightTextureCoordinate;
    varying vec2 topTextureCoordinate;
    varying vec2 topLeftTextureCoordinate;
    varying vec2 topRightTextureCoordinate;
    varying vec2 bottomTextureCoordinate;
    varying vec2 bottomLeftTextureCoordinate;
    varying vec2 bottomRightTextureCoordinate;

    void main()
    {
        gl_Position = position;

        vec2 widthStep = vec2(texelWidth, 0.0);
        vec2 heightStep = vec2(0.0, texelHeight);
        vec2 widthHeightStep = vec2(texelWidth, texelHeight);
        vec2 widthNegativeHeightStep = vec2(texelWidth, -texelHeight);

        textureCoordinate = inputTextureCoordinate.xy;

        leftTextureCoordinate = inputTextureCoordinate.xy - widthStep;
        rightTextureCoordinate = inputTextureCoordinate.xy + widthStep;

        topTextureCoordinate = inputTextureCoordinate.xy - heightStep;
        topLeftTextureCoordinate = inputTextureCoordinate.xy - widthHeightStep;
        topRightTextureCoordinate = inputTextureCoordinate.xy + widthNegativeHeightStep;

        bottomTextureCoordinate = inputTextureCoordinate.xy + heightStep;
        bottomLeftTextureCoordinate = inputTextureCoordinate.xy - widthNegativeHeightStep;
        bottomRightTextureCoordinate = inputTextureCoordinate.xy + widthHeightStep;
    }

and the following fragment shader:

    precision highp float;

    uniform sampler2D inputImageTexture;
    uniform mediump mat3 convolutionMatrix;

    varying vec2 textureCoordinate;
    varying vec2 leftTextureCoordinate;
    varying vec2 rightTextureCoordinate;
    varying vec2 topTextureCoordinate;
    varying vec2 topLeftTextureCoordinate;
    varying vec2 topRightTextureCoordinate;
    varying vec2 bottomTextureCoordinate;
    varying vec2 bottomLeftTextureCoordinate;
    varying vec2 bottomRightTextureCoordinate;

    void main()
    {
        mediump vec4 bottomColor = texture2D(inputImageTexture, bottomTextureCoordinate);
        mediump vec4 bottomLeftColor = texture2D(inputImageTexture, bottomLeftTextureCoordinate);
        mediump vec4 bottomRightColor = texture2D(inputImageTexture, bottomRightTextureCoordinate);
        mediump vec4 centerColor = texture2D(inputImageTexture, textureCoordinate);
        mediump vec4 leftColor = texture2D(inputImageTexture, leftTextureCoordinate);
        mediump vec4 rightColor = texture2D(inputImageTexture, rightTextureCoordinate);
        mediump vec4 topColor = texture2D(inputImageTexture, topTextureCoordinate);
        mediump vec4 topRightColor = texture2D(inputImageTexture, topRightTextureCoordinate);
        mediump vec4 topLeftColor = texture2D(inputImageTexture, topLeftTextureCoordinate);

        mediump vec4 resultColor = topLeftColor * convolutionMatrix[0][0] + topColor * convolutionMatrix[0][1] + topRightColor * convolutionMatrix[0][2];
        resultColor += leftColor * convolutionMatrix[1][0] + centerColor * convolutionMatrix[1][1] + rightColor * convolutionMatrix[1][2];
        resultColor += bottomLeftColor * convolutionMatrix[2][0] + bottomColor * convolutionMatrix[2][1] + bottomRightColor * convolutionMatrix[2][2];

        gl_FragColor = resultColor;
    }

The texelWidth and texelHeight uniforms are the inverse of the width and height of the input image, and the convolutionMatrix uniform specifies the weights for the various samples in your convolution.
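
For concreteness, here is a hedged illustration of values these uniforms might take for a 640x480 input; the Laplacian-style edge kernel is just one common choice and is not taken from the answer above:

    // Shown as GLSL constants purely for illustration; in practice these values
    // are uploaded as uniforms from the host side.
    const highp float texelWidth  = 1.0 / 640.0;  // one texel step horizontally
    const highp float texelHeight = 1.0 / 480.0;  // one texel step vertically

    // A symmetric edge-detection kernel (identical in row- or column-major order):
    //   -1 -1 -1
    //   -1  8 -1
    //   -1 -1 -1
    const mediump mat3 edgeKernel = mat3(-1.0, -1.0, -1.0,
                                         -1.0,  8.0, -1.0,
                                         -1.0, -1.0, -1.0);

    // Sanity check: a kernel that is all zeros except a 1.0 in the centre
    // reproduces the input image unchanged.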

On an iPhone 4, this runs in 4-8 ms for a 640x480 frame of camera video, which is good enough for 60 FPS rendering at that image size. If you just need to do something like edge detection, you can simplify the above, convert the image to luminance in a pre-pass, then only sample from one color channel. That's even faster, at about 2 ms per frame on the same device.
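
To tie this back to the original topic (font outlining), the following is only a sketch of how the same nine precomputed coordinates could be used to stroke a font texture: instead of a weighted sum, it dilates the glyph's alpha channel and paints an outline colour behind the glyph. The outlineColor uniform and the max-of-neighbours rule are my assumptions; this adapts, rather than reproduces, the answer above.

    precision mediump float;

    uniform sampler2D inputImageTexture;   // the rendered glyph texture (assumption)
    uniform vec4 outlineColor;             // stroke colour, e.g. opaque black (assumption)

    varying vec2 textureCoordinate;
    varying vec2 leftTextureCoordinate;
    varying vec2 rightTextureCoordinate;
    varying vec2 topTextureCoordinate;
    varying vec2 topLeftTextureCoordinate;
    varying vec2 topRightTextureCoordinate;
    varying vec2 bottomTextureCoordinate;
    varying vec2 bottomLeftTextureCoordinate;
    varying vec2 bottomRightTextureCoordinate;

    void main()
    {
        vec4 center = texture2D(inputImageTexture, textureCoordinate);

        // Take the strongest alpha among the eight neighbours: wherever any
        // neighbour is opaque, the outline colour shows through.
        float a = texture2D(inputImageTexture, leftTextureCoordinate).a;
        a = max(a, texture2D(inputImageTexture, rightTextureCoordinate).a);
        a = max(a, texture2D(inputImageTexture, topTextureCoordinate).a);
        a = max(a, texture2D(inputImageTexture, bottomTextureCoordinate).a);
        a = max(a, texture2D(inputImageTexture, topLeftTextureCoordinate).a);
        a = max(a, texture2D(inputImageTexture, topRightTextureCoordinate).a);
        a = max(a, texture2D(inputImageTexture, bottomLeftTextureCoordinate).a);
        a = max(a, texture2D(inputImageTexture, bottomRightTextureCoordinate).a);

        // Outline behind the glyph, glyph composited on top.
        vec4 stroke = outlineColor * a;
        gl_FragColor = mix(stroke, center, center.a);
    }

A one-texel dilation like this gives a thin stroke; thicker outlines would need more samples or repeated passes, which is where the performance advice above starts to matter.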


