Oculus VR SDK Implementation: Oculus's Swap Chain Design for Stereo (Per-Eye) Display


Contents

1. Creating the Swap Chain
2. Creating the DepthBuffers and FrameBuffers
3. Using the Buffers Through the Swap Chain

1. Creating the Swap Chain

First, let's look at the ovrFramebuffer struct:

typedef struct {
    int                     Width;
    int                     Height;
    int                     Multisamples;
    int                     TextureSwapChainLength;
    int                     TextureSwapChainIndex;
    ovrTextureSwapChain *   ColorTextureSwapChain;
    GLuint *                DepthBuffers;
    GLuint *                FrameBuffers;
} ovrFramebuffer;

This struct describes the information an Oculus framebuffer carries: the framebuffer's width, height, and multisample count; the length of the texture swap chain; the index of the chain entry currently in use; an opaque pointer to the color texture swap chain; and pointers to the arrays of depth buffer and framebuffer object names.

Now let's look at the code that creates the swap chain:
// Create a texture swap chain: type 2D, format colorFormat.
frameBuffer->ColorTextureSwapChain = vrapi_CreateTextureSwapChain(
        VRAPI_TEXTURE_TYPE_2D, colorFormat, width, height, 1, true);

Judging from its parameters, this call creates a swap chain and hands the resulting handle back through frameBuffer->ColorTextureSwapChain. In the disassembly, the calls to glTexStorage2D and glTexStorage3D suggest that immutable textures are used here: the texture's format and size are specified before any data is loaded, which lets all consistency and memory checks happen up front. Once a texture is immutable, its format and size can no longer change, but the application can still load image data via glTexSubImage2D, glTexSubImage3D, glGenerateMipmap, or by rendering to the texture.
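
To make this concrete, here is a minimal sketch of what the chain's texture allocation might look like internally, assuming the glTexStorage2D path inferred from the disassembly. The names chainTextures, width, and height are illustrative; this is not the SDK's actual code:

#include <GLES3/gl3.h>

// Hypothetical internals of a two-entry swap chain (inferred, not actual SDK code).
const int width = 1024, height = 1024;   // the default eye-buffer size discussed below
GLuint chainTextures[2];
glGenTextures(2, chainTextures);
for (int i = 0; i < 2; i++) {
    glBindTexture(GL_TEXTURE_2D, chainTextures[i]);
    // Immutable storage: one mip level, format and size fixed up front,
    // so all consistency and memory checks happen at allocation time.
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
    // Texel data can still be loaded afterwards, e.g.:
    // glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
    //                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
glBindTexture(GL_TEXTURE_2D, 0);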

In other words, two immutable textures are created, and the array storing their GL names is passed back as the opaque ovrTextureSwapChain. In diagram form: the chain handle points at an array of two texture names, each naming one immutable 2D texture.

As for the format and size of these textures, consider the analysis of SuggestedEyeResolutionWidth and SuggestedEyeResolutionHeight in http://blog.csdn.net/d_uanrock/article/details/48209533:

There, ovrHmdInfo is a struct describing the VR headset (HMD); it contains the display resolution, refresh rate, the suggested (default) eye-buffer resolution, and the horizontal and vertical fields of view.
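
As a rough sketch of that struct: only the two fields named in the linked analysis are spelled out below; the other fields described above exist, but their exact names depend on the SDK version and are placeholders here:

// Partial sketch of ovrHmdInfo. Field names other than the two
// Suggested* ones are unverified placeholders for what the text describes.
typedef struct {
    int   SuggestedEyeResolutionWidth;   // default per-eye texture width
    int   SuggestedEyeResolutionHeight;  // default per-eye texture height
    /* ... display resolution, refresh rate,
       horizontal/vertical field of view ... */
} ovrHmdInfo;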

On a display with a resolution of 2560×1440, each pixel at the center of the eye's view covers roughly 0.06 degrees of visual arc. Wrapping a full 360 degrees would therefore take about 6000 pixels (360 / 0.06 = 6000), and a 90-degree field of view needs a quarter of that, about 1500 pixels. An eye image of 1536×1536 would thus give a 1:1 pixel mapping at the center, but pixels away from the center would need an even denser projection. Trading away that extra density for better performance, the SDK supplies a default of 1024×1024, so the default size of each texture's backing buffer in this program is 1024×1024.
The vrapi_GetTextureSwapChainLength call returns the swap chain's length, i.e. the number of textures in the chain; here it is 2.
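
Putting the pieces so far together, here is a minimal sketch of how the ovrFramebuffer fields might be filled in at creation time, using only the calls discussed above (width, height, multisamples, and colorFormat are assumed from the surrounding context):

frameBuffer->Width                  = width;
frameBuffer->Height                 = height;
frameBuffer->Multisamples           = multisamples;
// Create the chain, then record its length and start at entry 0.
frameBuffer->ColorTextureSwapChain  = vrapi_CreateTextureSwapChain(
        VRAPI_TEXTURE_TYPE_2D, colorFormat, width, height, 1, true);
frameBuffer->TextureSwapChainLength = vrapi_GetTextureSwapChainLength(
        frameBuffer->ColorTextureSwapChain);
frameBuffer->TextureSwapChainIndex  = 0;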

2. Creating the DepthBuffers and FrameBuffers

This length is used again when creating both the DepthBuffers and the FrameBuffers:
frameBuffer->DepthBuffers = (GLuint *) malloc(
        frameBuffer->TextureSwapChainLength * sizeof(GLuint));
frameBuffer->FrameBuffers = (GLuint *) malloc(
        frameBuffer->TextureSwapChainLength * sizeof(GLuint));
Note that only a tiny block of 2 * sizeof(GLuint) is allocated here: as with the texture swap chain, these pointers refer merely to arrays of GL object names. Once the name arrays exist, the code generates the actual objects, using glGenRenderbuffers for the depth buffers and glGenFramebuffers for the framebuffers:

for (int i = 0; i < frameBuffer->TextureSwapChainLength; i++) {
    // The i-th color texture comes from the swap chain; in the SDK sample it
    // is obtained with vrapi_GetTextureSwapChainHandle (added here for
    // context, since colorTexture is otherwise undefined in this excerpt).
    const GLuint colorTexture = vrapi_GetTextureSwapChainHandle(
            frameBuffer->ColorTextureSwapChain, i);

    // Create the multisampled depth buffer.
    // glGenRenderbuffers (ch. 12, p. 233) allocates n renderbuffer object
    // names; here one name, a non-zero unsigned integer, is stored in
    // frameBuffer->DepthBuffers[i].
    GL(glGenRenderbuffers(1, &frameBuffer->DepthBuffers[i]));
    // Bind the renderbuffer; the first argument must be GL_RENDERBUFFER.
    GL(glBindRenderbuffer(GL_RENDERBUFFER, frameBuffer->DepthBuffers[i]));
    GL(glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER,
            multisamples, GL_DEPTH_COMPONENT24, width, height));
    GL(glBindRenderbuffer(GL_RENDERBUFFER, 0));

    // Create the frame buffer.
    GL(glGenFramebuffers(1, &frameBuffer->FrameBuffers[i]));
    GL(glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer->FrameBuffers[i]));
    GL(glFramebufferTexture2DMultisampleEXT(GL_FRAMEBUFFER,
            GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTexture,
            0, multisamples));
    GL(glFramebufferRenderbuffer(GL_FRAMEBUFFER,
            GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER,
            frameBuffer->DepthBuffers[i]));
    GL(GLenum renderFramebufferStatus = glCheckFramebufferStatus(GL_FRAMEBUFFER));
    GL(glBindFramebuffer(GL_FRAMEBUFFER, 0));
    if (renderFramebufferStatus != GL_FRAMEBUFFER_COMPLETE) {
        ALOGE("Incomplete frame buffer object: %s",
                EglFrameBufferStatusString(renderFramebufferStatus));
        return false;
    }
}
At this point, storage has been allocated for each depth buffer, and each framebuffer object has been created and initialized accordingly.
Two calls are worth understanding here: glFramebufferTexture2D and glFramebufferRenderbuffer. According to the explanation at http://www.songho.ca/opengl/gl_fbo.html: "FBO provides glFramebufferTexture2D() to switch 2D texture objects, and glFramebufferRenderbuffer() to switch renderbuffer objects." In other words, this code attaches a 2D texture object and a renderbuffer object to the FBO.
Good. Let's take stock of the data structures at this point: the swap chain's texture array, the DepthBuffers array, and the FrameBuffers array each have TextureSwapChainLength (here, 2) entries, indexed in parallel.
Note that each entry of FrameBuffers is now a genuine FBO, i.e. a set of attachment points. On each FBO, a texture is bound to GL_COLOR_ATTACHMENT0 and a depth renderbuffer to GL_DEPTH_ATTACHMENT (it is worth re-reading the article linked above with this in mind). So the data structure now looks like this: FrameBuffers[i] is an FBO whose color attachment is the chain's i-th texture and whose depth attachment is DepthBuffers[i].

3. Using the Buffers Through the Swap Chain


With that covered, let's see how the program actually uses these data structures through the swap chain:

 
ovrFramebuffer_Resolve(frameBuffer);

parms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].ColorTextureSwapChain =
        frameBuffer->ColorTextureSwapChain;
parms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].TextureSwapChainIndex =
        frameBuffer->TextureSwapChainIndex;
parms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].TexCoordsFromTanAngles =
        renderer->TexCoordsTanAnglesMatrix;    // texture coordinate mapping
parms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].HeadPose =
        updatedTracking.HeadPose;

ovrFramebuffer_Advance(frameBuffer);
Since both a left-eye and a right-eye image must be drawn, the parms object that holds the per-eye data structures is of type ovrFrameParms:
typedef struct {
    // Layers composited in the time warp.
    ovrFrameLayer           Layers[VRAPI_FRAME_LAYER_TYPE_MAX];
    int                     LayerCount;
    // Combination of ovrFrameOption flags.
    int                     WarpOptions;
    // Which program to run with these layers.
    ovrFrameProgram         WarpProgram;
    // Program-specific tuning values.
    float                   ProgramParms[4];
    // Application controlled frame index that uniquely identifies this particular frame.
    // This must be the same frame index that was passed to vrapi_GetPredictedDisplayTime()
    // when synthesis of this frame started.
    long long               FrameIndex;
    // WarpSwap will not return until at least this many V-syncs have
    // passed since the previous WarpSwap returned.
    // Setting to 2 will reduce power consumption and may make animation
    // more regular for applications that can't hold full frame rate.
    int                     MinimumVsyncs;
    // Latency Mode.
    ovrExtraLatencyMode     ExtraLatencyMode;
    // Rotation from a joypad can be added on generated frames to reduce
    // judder in FPS style experiences when the application framerate is
    // lower than the V-sync rate.
    // This will be applied to the view space distorted
    // eye vectors before applying the rest of the time warp.
    // This will only be added when the same ovrFrameParms is used for
    // more than one V-sync.
    ovrMatrix4f             ExternalVelocity;
    // jobject that will be updated before each eye for minimal
    // latency with VRAPI_FRAME_PROGRAM_MASKED_PLANE_EXTERNAL.
    // IMPORTANT: This should be a JNI weak reference to the object.
    // The system will try to convert it into a global reference before
    // calling SurfaceTexture->Update, which allows it to be safely
    // freed by the application.
    jobject                 SurfaceTextureObject;
    // CPU/GPU performance parameters.
    ovrPerformanceParms     PerformanceParms;
    // For handling HMD events and power level state changes.
    ovrJava                 Java;
} ovrFrameParms;
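
For context, here is a minimal sketch of how such a parms object is typically initialized and submitted in samples from this SDK generation. The entry points vrapi_DefaultFrameParms and vrapi_SubmitFrame are assumed from that era of VrApi rather than shown in this post; ovr, java, and frameIndex come from the surrounding application code:

// Assumed VrApi 1.x usage (sketch, not this post's verbatim code).
ovrFrameParms parms = vrapi_DefaultFrameParms(&java, VRAPI_FRAME_INIT_DEFAULT,
        vrapi_GetTimeInSeconds(), NULL);
parms.FrameIndex = frameIndex;

// ... render both eyes and fill in parms.Layers[...] as shown above ...

// Hand the frame to the asynchronous time warp; this is where the
// MinimumVsyncs throttling described in the struct comment applies.
vrapi_SubmitFrame(ovr, &parms);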

Combining the analysis of this part in http://blog.csdn.net/d_uanrock/article/details/48209533 with the implementation code, we can see that each eye has its own ovrFrameLayerTexture structure:
typedef struct {
    // Because OpenGL ES does not support clampToBorder, it is the
    // application's responsibility to make sure that all mip levels
    // of the primary eye texture have a black border that will show
    // up when time warp pushes the texture partially off screen.
    // CLAMP_TO_BORDER causes OpenGL to only take the border color at the edge of the texture
    // rather than the average of the border color and texture edge texels.
    // This allows for a perfect border around the texture.
    ovrTextureSwapChain *   ColorTextureSwapChain;
    // The depth texture is optional for positional time warp.
    ovrTextureSwapChain *   DepthTextureSwapChain;
    // Index to the texture from the set that should be displayed.
    int                     TextureSwapChainIndex;
    // Points on the screen are mapped by a distortion correction
    // function into ( TanX, TanY, -1, 1 ) vectors that are transformed
    // by this matrix to get ( S, T, Q, _ ) vectors that are looked
    // up with texture2dproj() to get texels.
    ovrMatrix4f             TexCoordsFromTanAngles;
    // Only texels within this range should be drawn.
    // This is a sub-rectangle of the [(0,0)-(1,1)] texture coordinate range.
    ovrRectf                TextureRect;
    // The tracking state for which ModelViewMatrix is correct.
    // It is ok to update the orientation for each eye, which
    // can help minimize black edge pull-in, but the position
    // must remain the same for both eyes, or the position would
    // seem to judder "backwards in time" if a frame is dropped.
    ovrRigidBodyPosef       HeadPose;
} ovrFrameLayerTexture;
When the left eye is being processed, the left eye's structure is updated; when the right eye is being processed, the right eye's structure is updated. Combined with the data-structure overview above, the design is clear: the left eye uses one set of framebuffers and the right eye uses another. Each frame then proceeds through the following steps (a consolidated sketch of the whole loop follows step 6):
1. Compute the data to draw and the left/right eye view matrices.
2. Bind one framebuffer from the swap chain:
static void ovrFramebuffer_SetCurrent(ovrFramebuffer * frameBuffer) {
    GL(glBindFramebuffer(GL_FRAMEBUFFER,
            frameBuffer->FrameBuffers[frameBuffer->TextureSwapChainIndex]));
}

3. Draw the image into the bound framebuffer.
4. Unbind and resolve the framebuffer:

static void ovrFramebuffer_Resolve(ovrFramebuffer * frameBuffer) {
    // Discard the depth buffer, so the tiler won't need to write it back out to memory.
    const GLenum depthAttachment[1] = { GL_DEPTH_ATTACHMENT };
    glInvalidateFramebuffer(GL_FRAMEBUFFER, 1, depthAttachment);
    // Flush this frame worth of commands.
    glFlush();
}
5. Update the swap chain: publish the chain handle and the current index into this eye's layer entry:
parms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].ColorTextureSwapChain =
        frameBuffer->ColorTextureSwapChain;
parms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].TextureSwapChainIndex =
        frameBuffer->TextureSwapChainIndex;
parms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].TexCoordsFromTanAngles =
        renderer->TexCoordsTanAnglesMatrix;    // texture coordinate mapping
parms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].HeadPose =
        updatedTracking.HeadPose;

6. Advance to the swap chain index that the next frame will use:
static void ovrFramebuffer_Advance(ovrFramebuffer * frameBuffer) {
    // Advance to the next texture from the set.
    frameBuffer->TextureSwapChainIndex =
            (frameBuffer->TextureSwapChainIndex + 1)
                    % frameBuffer->TextureSwapChainLength;
}
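
As promised above, here is a minimal sketch of how steps 1 through 6 combine into a per-eye render loop. The loop structure and the VRAPI_FRAME_LAYER_EYE_MAX bound are assumptions modeled on typical VrApi sample code; renderer, parms, and updatedTracking come from the surrounding application:

// Assumed per-frame loop (sketch, not this post's verbatim code).
for (int eye = 0; eye < VRAPI_FRAME_LAYER_EYE_MAX; eye++) {
    ovrFramebuffer * frameBuffer = &renderer->FrameBuffer[eye];
    ovrFramebuffer_SetCurrent(frameBuffer);        // step 2: bind this eye's FBO
    // ... step 3: draw this eye's scene into the bound framebuffer ...
    ovrFramebuffer_Resolve(frameBuffer);           // step 4: invalidate depth, flush
    parms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].ColorTextureSwapChain =
            frameBuffer->ColorTextureSwapChain;    // step 5: publish chain + index
    parms.Layers[VRAPI_FRAME_LAYER_TYPE_WORLD].Textures[eye].TextureSwapChainIndex =
            frameBuffer->TextureSwapChainIndex;
    ovrFramebuffer_Advance(frameBuffer);           // step 6: next chain entry
}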
Cycling through steps 1 to 6 every frame, once per eye, produces the effect of one pass drawing the left eye and the next drawing the right.

This post has covered the data structures Oculus builds around the texture swap chain at the OpenGL level, and how the left- and right-eye images are displayed on top of that swap chain.
