VR Series - Oculus Rift Developer Guide: III. Rendering to the Oculus Rift (Part 7)


Layers

Similar to the way a monitor view can be composed of multiple windows, the display on the headset can be composed of multiple layers. Typically at least one of these layers is a view rendered from the user's virtual eyeballs, but other layers may be HUD layers, information panels, text labels attached to objects in the world, aiming reticles, and so on.

Each layer can have a different resolution, use a different texture format, use a different field of view or size, and be in mono or stereo. The application can also skip updating a layer's texture when the information in it has not changed, for example if the text in an information panel is unchanged since the last frame, or if the layer is a picture-in-picture view of a low-framerate video stream. Applications can supply mipmapped textures to a layer and, combined with a high-quality distortion mode, this is very effective at improving the readability of text panels.

Every frame, all active layers are composited from back to front using premultiplied alpha blending. Layer 0 is the furthest layer, layer 1 is on top of it, and so on; there is no depth-buffer intersection testing between layers, even if a depth buffer is supplied.
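
Because the compositor blends layers with premultiplied alpha, any translucent content an application renders into a layer's color texture should itself be kept in premultiplied form. Below is a minimal sketch of one way to do that, assuming a D3D11 renderer (device and context are hypothetical ID3D11Device* / ID3D11DeviceContext* pointers; the SDK does not require this particular setup): a ONE / INV_SRC_ALPHA blend state used while rendering into the layer texture.

// Sketch only; requires <d3d11.h>. Keeps the layer texture in premultiplied
// alpha form, i.e. out = src + dst * (1 - src.a).
D3D11_BLEND_DESC blendDesc = {};
blendDesc.RenderTarget[0].BlendEnable           = TRUE;
blendDesc.RenderTarget[0].SrcBlend              = D3D11_BLEND_ONE;            // colors are already premultiplied
blendDesc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState *premultipliedAlphaBlend = nullptr;
device->CreateBlendState(&blendDesc, &premultipliedAlphaBlend);
context->OMSetBlendState(premultipliedAlphaBlend, nullptr, 0xFFFFFFFF);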

A powerful feature of layers is that each can have a different resolution. This allows an application to scale down to lower-performance systems by dropping the resolution of the main eye-buffer render that shows the virtual world, while keeping essential information, such as text or a map, at a higher resolution in a separate layer.
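
As a rough sketch of that idea, an application might request smaller eye buffers on a slow machine while leaving a text-heavy quad layer at full resolution. The ovr_GetFovTextureSize call and its pixelsPerDisplayPixel parameter come from the eye-buffer setup covered earlier in this guide; HmdDesc stands in for the HMD description obtained there, and the 0.7 scale factor and the 1024x512 HUD size are arbitrary example values.

// Drop the virtual-world eye buffers to roughly 70% of the recommended pixel density...
float pixelsPerDisplayPixel = 0.7f;   // example value for a slower machine
ovrSizei leftSize  = ovr_GetFovTextureSize(Hmd, ovrEye_Left,
                                           HmdDesc.DefaultEyeFov[ovrEye_Left],
                                           pixelsPerDisplayPixel);
ovrSizei rightSize = ovr_GetFovTextureSize(Hmd, ovrEye_Right,
                                           HmdDesc.DefaultEyeFov[ovrEye_Right],
                                           pixelsPerDisplayPixel);

// ...but keep the HUD quad texture at a fixed, full resolution so text stays sharp.
ovrSizei hudSize = { 1024, 512 };

// Swap texture sets are then created at those sizes,
// e.g. ovr_CreateSwapTextureSetD3D11 ( ... ) as in the examples below.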

There are several layer types available:

  • EyeFov: the standard "eye buffer" familiar from previous SDKs, typically a stereo view of the virtual scene rendered from the position of the user's eyes. Eye buffers can be mono, but this can cause discomfort. Previous SDKs had an implicit field of view (FOV) and viewport; these are now supplied explicitly, and the application can change them every frame if desired.
  • EyeFovDepth: an eye-buffer render combined with depth-buffer information. Currently only layer 0 can be of this type. Note: the depth buffer is not currently used for occlusion (Z testing) between layers.
  • Quad: a monoscopic image displayed as a rectangle at a given pose and size in the virtual world. This is useful for heads-up displays, text information, object labels, and so on. By default the pose is specified relative to the user's real-world space, and the quad stays fixed in space rather than moving with the user's head or body. For head-locked quads, use the ovrLayerFlag_HeadLocked flag described below.
  • Direct: displayed directly on the framebuffer; intended primarily for debugging. No timewarp, distortion, or chromatic aberration correction is applied to this layer, so images from it will usually not look correct or comfortable while wearing the HMD.
  • Disabled: ignored by the compositor; disabled layers cost nothing. We recommend that applications perform basic frustum culling and disable layers that are out of view (a short sketch follows this list). There is no need to repack the list of active layers when turning a layer off; disabling it and leaving it in the list is sufficient. Equivalently, the pointer to the layer in the list can be set to null.
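
A minimal sketch of that culling recommendation is shown below. IsQuadVisible is a hypothetical application-side visibility test; hudLayer and layerList refer to the quad layer and layer list set up in the submission example later in this section.

// Disable the quad layer when it falls outside the view; re-enable it otherwise.
// The layer list itself is left untouched.
if ( !IsQuadVisible(hudLayer.QuadPoseCenter, hudLayer.QuadSize) )
    hudLayer.Header.Type = ovrLayerType_Disabled;
else
    hudLayer.Header.Type = ovrLayerType_Quad;

// Equivalently, the layer's slot in the list could simply be set to null:
// layerList[1] = nullptr;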

Each layer type has a corresponding member of the ovrLayerType enum and an associated structure holding the data required to display that layer. For example, the EyeFov layer has type ovrLayerType_EyeFov and is described by the data in the ovrLayerEyeFov structure. These structures share a similar set of parameters, though not all layer types require all of them:

  • Header.Type (enum ovrLayerType): must be set by all layers to specify what type they are.
  • Header.Flags (a bitfield of ovrLayerFlags): see below for more information.
  • ColorTexture (ovrSwapTextureSet): provides color and translucency data for the layer. Layers are blended over one another using premultiplied alpha, which lets them express lerp-style blending, additive blending, or a combination of the two. Layer textures must be in an RGBA or BGRA format and may have mipmaps, but cannot be arrays, cubes, or MSAA. If the application wants to render with MSAA, it must resolve the intermediate MSAA color texture into the layer's non-MSAA ColorTexture.
  • DepthTexture (ovrSwapTextureSet): provides depth data for the EyeFovDepth layer type; positional timewarp uses it to apply the correct parallax for the layer. This data is not used for occlusion or intersection with other layers. It does not have to match the ColorTexture resolution, and 2x or 4x MSAA is allowed.
  • ProjectionDesc (ovrTimewarpProjectionDesc): supplies the information needed to interpret the data held in DepthTexture for the EyeFovDepth layer type. It should be extracted from the application's projection matrix using the ovrTimewarpProjectionDesc_FromProjection utility function.
  • Viewport (ovrRecti): the rectangle of the texture that is actually used, specified in 0-1 texture "UV" coordinate space (not pixels). In theory, texture data outside this region is not visible in the layer; however, the usual caveats about texture sampling apply, especially with mipmapped textures. It is good practice to leave a border of RGBA(0,0,0,0) pixels around the displayed region to avoid "bleeding", especially between two eye buffers packed side by side into the same texture. The size of the border depends on the exact use case, but around 8 pixels works well in most cases.
  • Fov (ovrFovPort): the field of view used to render the scene in an Eye layer type. Note that this does not control the HMD's display; it simply tells the compositor what FOV was used to render the texture data in the layer, and the compositor then adjusts to whatever the user's actual FOV is. Applications may change the FOV dynamically for special effects. Reducing the FOV may also help performance on slower machines, though it is usually more effective to reduce resolution before reducing FOV.
  • RenderPose (ovrPosef): the camera pose the application used to render the scene in an Eye layer type. This is typically predicted by the SDK and the application using the ovr_GetTrackingState and ovr_CalcEyePoses functions. The difference between this pose and the actual pose of the eye at display time is used by the compositor to apply timewarp to the layer.
  • SensorSampleTime (double): the absolute time at which the application sampled the tracking state. The typical way to acquire this value is to call ovr_GetTimeInSeconds right next to the ovr_GetTrackingState call. The SDK uses this value to report the application's motion-to-photon latency in the Performance HUD. If the application submits more than one ovrLayerType_EyeFov layer in a given frame, the SDK scans those layers and selects the timing with the lowest latency. If no ovrLayerType_EyeFov layer is submitted in a frame, the SDK instead uses the time at which ovr_GetTrackingState was called with latencyMarker set to ovrTrue as the substitute motion-to-photon latency time.
  • QuadPoseCenter (ovrPosef): specifies the orientation and position of the center point of a Quad layer type. The supplied direction is the vector perpendicular to the quad. The position is in real-world meters (the actual world the user is in, not the application's virtual world) and is relative to the "zero" position set by ovr_RecenterPose, unless the ovrLayerFlag_HeadLocked flag is used.
  • QuadSize (ovrVector2f): specifies the width and height of a Quad layer type. As with the position, this is in real-world meters.
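
Putting the RenderPose and SensorSampleTime entries together, a typical per-frame sampling sequence looks roughly like the sketch below. The exact ovr_GetTrackingState signature (in particular the latencyMarker argument) varies between SDK versions, and hmdToEyeViewOffset and eyeLayer are the per-eye offsets and eye layer used in the submission example later in this section.

// Sample the clock and the tracking state back to back, so SensorSampleTime
// matches the pose prediction actually used for rendering this frame.
double sampleTime = ovr_GetTimeInSeconds();
ovrTrackingState trackingState = ovr_GetTrackingState(Hmd, sampleTime, ovrTrue);

// Predict the two eye poses from the head pose and the per-eye offsets.
ovrPosef eyePoses[2];
ovr_CalcEyePoses(trackingState.HeadPose.ThePose, hmdToEyeViewOffset, eyePoses);

// Record both in the layer that will be submitted at the end of the frame.
eyeLayer.RenderPose[0]    = eyePoses[0];
eyeLayer.RenderPose[1]    = eyePoses[1];
eyeLayer.SensorSampleTime = sampleTime;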

Layers that take stereo information (all of them except the Quad layer types) take two sets of most of these parameters, and these can be used in three different ways:

  • Stereo data, separate textures: the application supplies a different ovrSwapTextureSet for the left and right eyes, plus a viewport for each.
  • Stereo data, shared texture: the application supplies the same ovrSwapTextureSet for both the left and right eyes, but a different viewport for each. This allows the application to render both the left and right views into the same texture buffer. As discussed above, remember to leave a small border between the two views to prevent "bleeding" (a sketch of this setup follows below).
  • Mono data: the application supplies the same ovrSwapTextureSet and the same viewport for both eyes.

Texture and viewport sizes may be different for the left and right eyes, and each eye can even have a different field of view. However, beware of causing stereo disparity and discomfort in your users.
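
As an illustration of the shared-texture case, a single wide texture can hold both eye views side by side, with each eye given its own viewport and a small guard band left between them. In the sketch below, sharedEyeTextureSet is a hypothetical ovrSwapTextureSet, the 2048x1024 size and 8-pixel border are example values only, and the viewport values are written in pixels; the Viewport convention (0-1 UV space versus pixels) has varied between SDK revisions, so use whichever units your SDK version expects.

// One shared swap texture set, e.g. 2048x1024, with the left view in the
// left half and the right view in the right half.
ovrLayerEyeFov stereoLayer;
stereoLayer.Header.Type  = ovrLayerType_EyeFov;
stereoLayer.Header.Flags = 0;
stereoLayer.ColorTexture[0] = sharedEyeTextureSet;   // same set for both eyes
stereoLayer.ColorTexture[1] = sharedEyeTextureSet;

// Left eye: left half, minus an 8-pixel border to avoid bleeding.
stereoLayer.Viewport[0].Pos.x  = 0;
stereoLayer.Viewport[0].Pos.y  = 0;
stereoLayer.Viewport[0].Size.w = 1024 - 8;
stereoLayer.Viewport[0].Size.h = 1024;

// Right eye: right half, offset past the guard band.
stereoLayer.Viewport[1].Pos.x  = 1024 + 8;
stereoLayer.Viewport[1].Pos.y  = 0;
stereoLayer.Viewport[1].Size.w = 1024 - 8;
stereoLayer.Viewport[1].Size.h = 1024;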

The Header.Flags field, available for all layers, is a logical OR of the following:

  • ovrLayerFlag_HighQuality: enables a slightly more expensive but higher-quality path in the compositor for this layer. This can significantly improve legibility, especially when used with a mipmapped texture; it is recommended for high-frequency images such as text or diagrams and for use with the Quad layer types. It has relatively little visual effect on Eye layer types showing typical virtual-world images.
  • ovrLayerFlag_TextureOriginAtBottomLeft: by default, the origin of a layer's texture is assumed to be the top-left corner. However, some engines (particularly those using OpenGL) prefer the bottom-left corner as the origin, and they should use this flag.
  • ovrLayerFlag_HeadLocked: most layer types have their pose orientation and position specified relative to the "zero position" defined by calling ovr_RecenterPose. However, an application may wish to specify a layer's pose relative to the user's face, so that the layer follows when the user moves their head. This is useful for reticles used in gaze-based aiming or selection (a short sketch follows this list). The flag may be used with all layer types, though it has no effect on the Direct type.
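
For example, a gaze-based aiming reticle can be built as a small head-locked, high-quality quad. In the sketch below, ReticleTextureSet is a hypothetical ovrSwapTextureSet created the same way as the HUD texture set in the example that follows, and the 2cm size and 50cm distance are arbitrary example values.

// A small reticle quad locked to the user's face.
ovrLayerQuad reticleLayer;
reticleLayer.Header.Type  = ovrLayerType_Quad;
reticleLayer.Header.Flags = ovrLayerFlag_HeadLocked | ovrLayerFlag_HighQuality;
reticleLayer.ColorTexture = ReticleTextureSet;        // small RGBA texture holding the crosshair

// 50cm straight ahead of the eyes; with ovrLayerFlag_HeadLocked the pose is
// interpreted relative to the user's face rather than the recentered origin.
reticleLayer.QuadPoseCenter.Position.x = 0.0f;
reticleLayer.QuadPoseCenter.Position.y = 0.0f;
reticleLayer.QuadPoseCenter.Position.z = -0.50f;
reticleLayer.QuadPoseCenter.Orientation.x = 0.0f;
reticleLayer.QuadPoseCenter.Orientation.y = 0.0f;
reticleLayer.QuadPoseCenter.Orientation.z = 0.0f;
reticleLayer.QuadPoseCenter.Orientation.w = 1.0f;     // identity orientation

// 2cm x 2cm in real-world meters.
reticleLayer.QuadSize.x = 0.02f;
reticleLayer.QuadSize.y = 0.02f;

// Display the whole reticle texture.
reticleLayer.Viewport.Pos.x  = 0.0f;
reticleLayer.Viewport.Pos.y  = 0.0f;
reticleLayer.Viewport.Size.w = 1.0f;
reticleLayer.Viewport.Size.h = 1.0f;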

At the end of each frame, after rendering to whichever ovrSwapTextureSet the application wants to update, the data for each layer is put into the corresponding ovrLayerEyeFov / ovrLayerEyeFovDepth / ovrLayerQuad / ovrLayerDirect structure. The application then creates a list of pointers to those layer structures, specifically to the Header field, which is guaranteed to be the first member of each structure. Finally, the application fills an ovrViewScaleDesc struct with the required data and calls ovr_SubmitFrame:

// Create eye layer.
ovrLayerEyeFov eyeLayer;
eyeLayer.Header.Type  = ovrLayerType_EyeFov;
eyeLayer.Header.Flags = 0;
for ( int eye = 0; eye < 2; eye++ )
{
    eyeLayer.ColorTexture[eye] = EyeBufferSet[eye];
    eyeLayer.Viewport[eye]     = EyeViewport[eye];
    eyeLayer.Fov[eye]          = EyeFov[eye];
    eyeLayer.RenderPose[eye]   = EyePose[eye];
}

// Create HUD layer, fixed to the player's torso.
ovrLayerQuad hudLayer;
hudLayer.Header.Type  = ovrLayerType_Quad;
hudLayer.Header.Flags = ovrLayerFlag_HighQuality;
hudLayer.ColorTexture = TheHudTextureSet;
// 50cm in front and 20cm down from the player's nose,
// fixed relative to their torso.
hudLayer.QuadPoseCenter.Position.x =  0.00f;
hudLayer.QuadPoseCenter.Position.y = -0.20f;
hudLayer.QuadPoseCenter.Position.z = -0.50f;
hudLayer.QuadPoseCenter.Orientation.x = 0.0f;
hudLayer.QuadPoseCenter.Orientation.y = 0.0f;
hudLayer.QuadPoseCenter.Orientation.z = 0.0f;
hudLayer.QuadPoseCenter.Orientation.w = 1.0f;   // identity orientation
// HUD is 50cm wide, 30cm tall.
hudLayer.QuadSize.x = 0.50f;
hudLayer.QuadSize.y = 0.30f;
// Display all of the HUD texture.
hudLayer.Viewport.Pos.x  = 0.0f;
hudLayer.Viewport.Pos.y  = 0.0f;
hudLayer.Viewport.Size.w = 1.0f;
hudLayer.Viewport.Size.h = 1.0f;

// The list of layers.
ovrLayerHeader *layerList[2];
layerList[0] = &eyeLayer.Header;
layerList[1] = &hudLayer.Header;

// Set up positional data.
ovrViewScaleDesc viewScaleDesc;
viewScaleDesc.HmdSpaceToWorldScaleInMeters = 1.0f;
viewScaleDesc.HmdToEyeViewOffset[0] = hmdToEyeViewOffset[0];
viewScaleDesc.HmdToEyeViewOffset[1] = hmdToEyeViewOffset[1];

ovrResult result = ovr_SubmitFrame(Hmd, 0, &viewScaleDesc, layerList, 2);

The compositor performs timewarp, distortion, and chromatic aberration correction on each layer separately before blending them together. The traditional method of rendering a quad into the eye buffer involves two filtering steps (once into the eye buffer, then again during distortion). Using layers, there is only a single filtering step between the layer image and the final framebuffer. This can substantially improve text quality, especially when combined with mipmaps and the ovrLayerFlag_HighQuality flag.

One current disadvantage of layers is that no post-processing can be performed on the final composited image, such as soft-focus effects, light-bloom effects, or Z intersection between layer data. Some of these effects can instead be applied to the contents of an individual layer, with similar visual results.

Calling ovr_SubmitFrame queues the layers for display and transfers control of the CurrentIndex texture inside each ovrSwapTextureSet to the compositor. It is important to understand that these textures are shared (rather than copied) between the application and the compositor threads, and that composition does not necessarily happen at the time ovr_SubmitFrame is called, so care must be taken. Oculus strongly recommends that the application not try to use or render to any of the textures and indices that were submitted in the most recent ovr_SubmitFrame call. For example:

// Create two SwapTextureSets to illustrate. Each has two textures, [0] and [1].
ovrSwapTextureSet *eyeSwapTextureSet;
ovr_CreateSwapTextureSetD3D11 ( ... &eyeSwapTextureSet );
ovrSwapTextureSet *hudSwapTextureSet;
ovr_CreateSwapTextureSetD3D11 ( ... &hudSwapTextureSet );

// Set up two layers.
ovrLayerEyeFov eyeLayer;
ovrLayerQuad hudLayer;
eyeLayer.Header.Type = ovrLayerType_EyeFov;
eyeLayer...etc...   // set up the rest of the data.
hudLayer.Header.Type = ovrLayerType_Quad;
hudLayer...etc...   // set up the rest of the data.

// The list of layers.
ovrLayerHeader *layerList[2];
layerList[0] = &eyeLayer.Header;
layerList[1] = &hudLayer.Header;

// Right now (no calls to ovr_SubmitFrame yet):
//   eyeSwapTextureSet->Textures[0]: available
//   eyeSwapTextureSet->Textures[1]: available
//   hudSwapTextureSet->Textures[0]: available
//   hudSwapTextureSet->Textures[1]: available

// Frame 1.
eyeSwapTextureSet->CurrentIndex = 0;
hudSwapTextureSet->CurrentIndex = 0;
eyeLayer.ColorTexture[0] = eyeSwapTextureSet;
eyeLayer.ColorTexture[1] = eyeSwapTextureSet;
hudLayer.ColorTexture = hudSwapTextureSet;
ovr_SubmitFrame(Hmd, 0, nullptr, layerList, 2);
// Now:
//   eyeSwapTextureSet->Textures[0]: in use by compositor
//   eyeSwapTextureSet->Textures[1]: available
//   hudSwapTextureSet->Textures[0]: in use by compositor
//   hudSwapTextureSet->Textures[1]: available

// Frame 2.
eyeSwapTextureSet->CurrentIndex = 1;
AppRenderScene ( eyeSwapTextureSet->Textures[1] );
// App does not render to the HUD and does not change the layer setup.
ovr_SubmitFrame(Hmd, 0, nullptr, layerList, 2);
// Now:
//   eyeSwapTextureSet->Textures[0]: available
//   eyeSwapTextureSet->Textures[1]: in use by compositor
//   hudSwapTextureSet->Textures[0]: in use by compositor
//   hudSwapTextureSet->Textures[1]: available

// Frame 3.
eyeSwapTextureSet->CurrentIndex = 0;
AppRenderScene ( eyeSwapTextureSet->Textures[0] );
// App hides the HUD.
hudLayer.Header.Type = ovrLayerType_Disabled;
ovr_SubmitFrame(Hmd, 0, nullptr, layerList, 2);
// Now:
//   eyeSwapTextureSet->Textures[0]: in use by compositor
//   eyeSwapTextureSet->Textures[1]: available
//   hudSwapTextureSet->Textures[0]: available
//   hudSwapTextureSet->Textures[1]: available

In other words, if a texture was used by the last ovr_SubmitFrame call, do not try to render to it. If it was not, you can.
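
A minimal per-frame pattern that respects this rule is to advance CurrentIndex to the texture the compositor is not holding before rendering. The sketch below assumes the two-texture swap sets, layer list, and AppRenderScene placeholder from the example above.

// Each frame: step to the other texture in the set, render into it, then submit.
eyeSwapTextureSet->CurrentIndex =
    (eyeSwapTextureSet->CurrentIndex + 1) % eyeSwapTextureSet->TextureCount;
AppRenderScene ( eyeSwapTextureSet->Textures[eyeSwapTextureSet->CurrentIndex] );
ovr_SubmitFrame(Hmd, 0, nullptr, layerList, 2);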


