iOS Dev Intro - Render YpCbCr



http://stackoverflow.com/questions/6432159/render-ypcbcr-iphone-4-camera-frame-to-an-opengl-es-2-0-texture-in-iOS-4-3


I'm trying to render a native planar image to an OpenGL ES 2.0 texture in iOS 4.3 on an iPhone 4. The texture, however, winds up all black. My camera is configured like this:

[videoOutput setVideoSettings:
    [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
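For context, here is a minimal sketch of the capture setup that snippet usually sits inside (the session, device, and queue choices here are illustrative, not taken from the question):

// Illustrative capture setup; 'self' must adopt
// AVCaptureVideoDataOutputSampleBufferDelegate.
AVCaptureSession *session = [[AVCaptureSession alloc] init];

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
[session addInput:input];

// Ask for bi-planar YCbCr frames and have them delivered to the delegate.
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setVideoSettings:
    [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:videoOutput];
[session startRunning];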

and I'm passing the pixel data to my texture like this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0,
             GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE,
             CVPixelBufferGetBaseAddress(cameraFrame));

My fragment shader is:

varying highp vec2 textureCoordinate;

uniform sampler2D videoFrame;

void main() {
    lowp vec4 color = texture2D(videoFrame, textureCoordinate);
    lowp vec3 convertedColor = vec3(-0.87075, 0.52975, -1.08175);
    convertedColor += 1.164 * color.g;                     // Y
    convertedColor += vec3(0.0, -0.391, 2.018) * color.b;  // U
    convertedColor += vec3(1.596, -0.813, 0.0) * color.r;  // V
    gl_FragColor = vec4(convertedColor, 1.0);
}
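For reference, those constants appear to be the bias terms of the standard video-range YCbCr-to-RGB equations folded into the initial vec3; a quick check of my own (not part of the original post), in plain C:

// Quick check of the folded bias constants (video-range YCbCr -> RGB):
//   R = 1.164*(Y - 0.0625) + 1.596*(V - 0.5)
//   G = 1.164*(Y - 0.0625) - 0.391*(U - 0.5) - 0.813*(V - 0.5)
//   B = 1.164*(Y - 0.0625) + 2.018*(U - 0.5)
#include <stdio.h>
int main(void) {
    printf("R bias %f\n", -1.164 * 0.0625 - 1.596 * 0.5);               // -0.87075
    printf("G bias %f\n", -1.164 * 0.0625 + 0.391 * 0.5 + 0.813 * 0.5); //  0.52925
    printf("B bias %f\n", -1.164 * 0.0625 - 2.018 * 0.5);               // -1.08175
    return 0;
}

The red and blue biases match the shader exactly; the green one works out to 0.52925 rather than the shader's 0.52975, so that constant may be very slightly off.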

and my vertex shader is

attribute vec4 position;
attribute vec4 inputTextureCoordinate;

varying vec2 textureCoordinate;

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}

This works just fine when I'm working with a BGRA image, and my fragment shader only does

gl_FragColor = texture2D(videoFrame, textureCoordinate);
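For comparison, the question doesn't show the upload used in the working BGRA case; on iOS it typically looks something like this, relying on the GL_APPLE_texture_format_BGRA8888 extension:

// One-call upload of a 32-bit BGRA frame; GL_BGRA is provided by the
// GL_APPLE_texture_format_BGRA8888 extension on iOS.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(cameraFrame));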

What, if anything, am I missing here? Thanks!


2 Answers

Accepted answer (8 votes)

OK, I have it working now. The key was passing the Y plane and the UV plane as two separate textures to the fragment shader. Here is the final shader:

#ifdef GL_ES
precision mediump float;
#endif

varying vec2 textureCoordinate;

uniform sampler2D videoFrame;
uniform sampler2D videoFrameUV;

const mat3 yuv2rgb = mat3(
                         1, 0, 1.2802,
                         1, -0.214821, -0.380589,
                         1, 2.127982, 0
                         );

void main() {
    vec3 yuv = vec3(
                    1.1643 * (texture2D(videoFrame, textureCoordinate).r - 0.0625),
                    texture2D(videoFrameUV, textureCoordinate).r - 0.5,
                    texture2D(videoFrameUV, textureCoordinate).a - 0.5
                    );
    vec3 rgb = yuv * yuv2rgb;
    gl_FragColor = vec4(rgb, 1.0);
}

You'll need to create your textures along these lines:

int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);

glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));

glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth / 2, bufferHeight / 2, 0,
             GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
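One detail the answer doesn't show, and a common cause of all-black output: camera frames are not power-of-two sized, and OpenGL ES 2.0 only samples NPOT textures that use CLAMP_TO_EDGE wrapping and a non-mipmapped filter, so each texture needs parameters along these lines when it is first created:

// ES 2.0 NPOT rule: wrap mode must be CLAMP_TO_EDGE and the min filter
// must not use mipmaps, otherwise sampling the texture returns black.
glGenTextures(1, &videoFrameTexture);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// ...and the same four glTexParameteri calls for videoFrameTextureUV.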

and then pass them like this:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glActiveTexture(GL_TEXTURE0);
glUniform1i(videoFrameUniform, 0);
glUniform1i(videoFrameUniformUV, 1);
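The videoFrameUniform and videoFrameUniformUV handles above would have been looked up once after linking; a sketch, with shaderProgram standing in for whatever program object links the two shaders:

// Fetch the sampler uniform locations once, after glLinkProgram succeeds.
GLint videoFrameUniform   = glGetUniformLocation(shaderProgram, "videoFrame");
GLint videoFrameUniformUV = glGetUniformLocation(shaderProgram, "videoFrameUV");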

Boy am I relieved!

P.S. The values for the yuv2rgb matrix are from http://en.wikipedia.org/wiki/YUV, and I adapted code from http://www.ogre3d.org/forums/viewtopic.php?f=5&t=25877 to figure out how to get the correct YUV values to begin with.

This seems to work for YUV, but the shader is not converting to RGB. There is some color, but it's not RGB. – Dex Sep 9 '11 at 22:26
 
Dex, what is your camera format set to? – davidbitton Sep 15 '11 at 20:02
 
I've cut and pasted the code, and I'm using camera format kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange. – Dex Sep 15 '11 at 23:47
@davidbitton I am trying to implement the solution you provided to solve a similar issue, drawing decoded YUV frames in openGL, but all that is drawn is a solid red or black frame, depending on whether or not I setup the frame buffer. Any thoughts as to what I am missing? – jmason Sep 13 '12 at 15:47
@jmason a lot of this has changed, especially in iOS 6. Look for the RosyWriter sample app from Apple. It has all that you need. I was unaware of it at the time. – davidbitton Sep 14 '12 at 17:29
Answer (2 votes)

Your code appears to be trying to convert a 32-bit 4:4:4-plus-unused-byte colour to RGBA. That's not going to work too well; for one thing, I don't know of anything that outputs "YUVA".

Also, I think the returned alpha channel is 0 for BGRA camera output, not 1, so I'm not sure why it works (IIRC to convert it to a CGImage you need to use AlphaNoneSkipLast).

The 420 "bi planar" output is structued something like this:

  1. A header telling you where the planes are (used by CVPixelBufferGetBaseAddressOfPlane() and friends)
  2. The Y plane: height × bytes_per_row_1 bytes (1 byte per pixel)
  3. The Cb,Cr plane: height/2 × bytes_per_row_2 bytes (2 bytes per 2x2 block of pixels).

bytes_per_row_1 is approximately width, and bytes_per_row_2 is also approximately width (the Cb and Cr bytes are interleaved, so each row of the second plane holds width/2 two-byte pairs), but you'll want to use CVPixelBufferGetBytesPerRowOfPlane() for robustness (you also might want to check the results of CVPixelBufferGetHeightOfPlane() and CVPixelBufferGetWidthOfPlane()).
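To make that concrete, here is a sketch for the Y plane that queries the per-plane geometry and only takes the direct-upload fast path when the rows turn out to be tightly packed (the repack case is sketched a couple of paragraphs below):

// Query the actual geometry of plane 0 (Y) rather than assuming it.
size_t width  = CVPixelBufferGetWidthOfPlane(cameraFrame, 0);
size_t height = CVPixelBufferGetHeightOfPlane(cameraFrame, 0);
size_t stride = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 0);
const uint8_t *base = CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0);

if (stride == width) {
    // Rows are tightly packed: the plane can be handed to GL directly.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, (GLsizei)width, (GLsizei)height,
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, base);
} else {
    // Rows are padded: ES 2.0 has no GL_UNPACK_ROW_LENGTH, so the plane
    // must be repacked into a tight buffer first (see the copy sketch below).
}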

You might have luck treating it as a 1-component width*height texture and a 2-component width/2*height/2 texture. You'll probably want to check bytes-per-row and handle the case where it isn't simply width*number-of-components (although this is probably true for most of the video modes). AIUI, you'll also want to flush the GL context before calling CVPixelBufferUnlockBaseAddress().

Alternatively, you can copy it all to memory into your expected format (optimizing this loop might be a bit tricky). Copying has the advantage that you don't need to worry about things accessing memory after you've unlocked the pixel buffer.
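As a sketch of that copy for one plane, assuming a caller-provided scratch buffer of rowBytes × rows bytes (rowBytes is width for the Y plane and also width for the interleaved Cb,Cr plane, which has height/2 rows; the helper name is mine):

#include <string.h> // memcpy

// Copy a plane whose rows may be padded into a tightly packed buffer,
// so it can be handed to glTexImage2D as-is. Must be called while the
// pixel buffer's base address is still locked.
static void RepackPlane(uint8_t *tight, const uint8_t *base,
                        size_t rowBytes, size_t stride, size_t rows) {
    for (size_t row = 0; row < rows; row++) {
        memcpy(tight + row * rowBytes, base + row * stride, rowBytes);
    }
}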

According to the APPLE_rgb_422 extension spec, Y, Cr, and Cb are laid directly into the G, B, and R color channels. I then gleaned conversion code from the net. Here is another approach; however, it doesn't work either: – davidbitton Jun 22 '11 at 5:06
 
precision mediump float;

varying vec2 textureCoordinate;

uniform sampler2D videoFrame;

const mat3 yuv2rgb = mat3(
                         1.0, 0.0, 1.4030,
                         1.0, -0.3440, -0.7140,
                         1.0, 1.7720, 0.0
                         );

void main() {
    vec4 color = texture2D(videoFrame, textureCoordinate);
    float y = color.g;
    float u = color.b - 0.5;
    float v = color.r - 0.5;
    vec3 rgb = yuv2rgb * vec3(y, u, v);
    gl_FragColor = vec4(rgb, 1.0);
}
– davidbitton Jun 22 '11 at 5:07
 
Pity; kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange is not 422. – tc. Jun 23 '11 at 23:36
 
Saw that. However, what about using GL_LUMINANCE and GL_LUMINANCE_ALPHA? I'm not familiar with passing two textures into a shader. – davidbitton Jun 27 '11 at 17:43
 
My advice is to just stick to BGRA, since it works on all devices (the iPhone 3G additionally does yuvs/2vuy, one of which is "native") – tc. Jun 27 '11 at 22:56
