cocos2d-x 2.0 font outlining (based on shaders)
Source: Internet · Editor: 程序博客网 · Date: 2024/05/13 11:20
Found this on a forum; I haven't tested the effect myself yet, so I'm saving it here for reference.
http://stackoverflow.com/questions/12469990/simple-glsl-convolution-shader-is-atrociously-slow
I've done this exact thing myself, and I see several things that could be optimized here.
First off, I'd remove the enableTexture conditional and instead split your shader into two programs, one for the true state of this and one for false. Conditionals are very expensive in iOS fragment shaders, particularly ones that have texture reads within them.
Second, you have nine dependent texture reads here. These are texture reads where the texture coordinates are calculated within the fragment shader. Dependent texture reads are very expensive on the PowerVR GPUs within iOS devices, because they prevent that hardware from optimizing texture reads using caching, etc. Because you are sampling from a fixed offset for the 8 surrounding pixels and one central one, these calculations should be moved up into the vertex shader. This also means that these calculations won't have to be performed for each pixel, just once for each vertex and then hardware interpolation will handle the rest.
Third, for() loops haven't been handled all that well by the iOS shader compiler to date, so I tend to avoid those where I can.
As I mentioned, I've done convolution shaders like this in my open source iOS GPUImage framework. For a generic convolution filter, I use the following vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

uniform highp float texelWidth;
uniform highp float texelHeight;

varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;

void main()
{
    gl_Position = position;

    vec2 widthStep = vec2(texelWidth, 0.0);
    vec2 heightStep = vec2(0.0, texelHeight);
    vec2 widthHeightStep = vec2(texelWidth, texelHeight);
    vec2 widthNegativeHeightStep = vec2(texelWidth, -texelHeight);

    textureCoordinate = inputTextureCoordinate.xy;
    leftTextureCoordinate = inputTextureCoordinate.xy - widthStep;
    rightTextureCoordinate = inputTextureCoordinate.xy + widthStep;

    topTextureCoordinate = inputTextureCoordinate.xy - heightStep;
    topLeftTextureCoordinate = inputTextureCoordinate.xy - widthHeightStep;
    topRightTextureCoordinate = inputTextureCoordinate.xy + widthNegativeHeightStep;

    bottomTextureCoordinate = inputTextureCoordinate.xy + heightStep;
    bottomLeftTextureCoordinate = inputTextureCoordinate.xy - widthNegativeHeightStep;
    bottomRightTextureCoordinate = inputTextureCoordinate.xy + widthHeightStep;
}
and the following fragment shader:
precision highp float;

uniform sampler2D inputImageTexture;
uniform mediump mat3 convolutionMatrix;

varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;

void main()
{
    mediump vec4 bottomColor = texture2D(inputImageTexture, bottomTextureCoordinate);
    mediump vec4 bottomLeftColor = texture2D(inputImageTexture, bottomLeftTextureCoordinate);
    mediump vec4 bottomRightColor = texture2D(inputImageTexture, bottomRightTextureCoordinate);
    mediump vec4 centerColor = texture2D(inputImageTexture, textureCoordinate);
    mediump vec4 leftColor = texture2D(inputImageTexture, leftTextureCoordinate);
    mediump vec4 rightColor = texture2D(inputImageTexture, rightTextureCoordinate);
    mediump vec4 topColor = texture2D(inputImageTexture, topTextureCoordinate);
    mediump vec4 topRightColor = texture2D(inputImageTexture, topRightTextureCoordinate);
    mediump vec4 topLeftColor = texture2D(inputImageTexture, topLeftTextureCoordinate);

    mediump vec4 resultColor = topLeftColor * convolutionMatrix[0][0] + topColor * convolutionMatrix[0][1] + topRightColor * convolutionMatrix[0][2];
    resultColor += leftColor * convolutionMatrix[1][0] + centerColor * convolutionMatrix[1][1] + rightColor * convolutionMatrix[1][2];
    resultColor += bottomLeftColor * convolutionMatrix[2][0] + bottomColor * convolutionMatrix[2][1] + bottomRightColor * convolutionMatrix[2][2];

    gl_FragColor = resultColor;
}
The texelWidth and texelHeight uniforms are the inverse of the width and height of the input image, and the convolutionMatrix uniform specifies the weights for the various samples in your convolution.
On an iPhone 4, this runs in 4-8 ms for a 640x480 frame of camera video, which is good enough for 60 FPS rendering at that image size. If you just need to do something like edge detection, you can simplify the above, convert the image to luminance in a pre-pass, then only sample from one color channel. That's even faster, at about 2 ms per frame on the same device.