VR Series: Oculus Rift Developer Guide, Part 3: Rendering with the Oculus Rift (6)


Rendering on Different Threads

In some engines, render processing is distributed across more than one thread.

For example, one thread may perform culling and render setup for each object in the scene (we'll call this the "main" thread), while a second thread makes the actual D3D or OpenGL API calls (we'll call this the "render" thread). Both of these threads may need accurate estimates of frame display time in order to compute the best possible head-pose predictions.

The asynchronous nature of this approach makes it challenging: while the render thread is rendering one frame, the main thread may already be processing the next. Depending on how the game engine is designed, this parallel frame processing may be out of sync by exactly one frame or by a fraction of a frame. If we used the default global state to access frame timing, the result of ovr_GetPredictedDisplayTime could be off by one frame depending on which thread calls it, or worse, could be randomly incorrect depending on how the threads are scheduled. To address this, the previous section introduced the frameIndex parameter, which is tracked by the application and passed across threads along with the frame data.
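As a minimal sketch of why the explicit index matters (assuming LibOVR 1.x, where a frame index of zero asks the SDK to predict for whatever frame it considers next; session and frameIndex are assumed to be in scope):

// Ambiguous: with a frame index of 0, the prediction targets "the next
// frame", which depends on which thread makes the call and when it runs.
double tAmbiguous = ovr_GetPredictedDisplayTime(session, 0);

// Consistent: with an explicit, application-tracked index, every thread
// asking about frame N receives the same predicted display time.
double tFrameN = ovr_GetPredictedDisplayTime(session, frameIndex);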

For the result of multi-threaded rendering to be correct, the following must hold: (a) the pose prediction, computed from frame timing, must be consistent for the same frame regardless of which thread it is accessed from; and (b) the eye poses actually used for rendering must be passed into ovr_SubmitFrame, along with the frame index.

Here is a summary of the steps you can take to ensure this is the case:

  1. The main thread needs to assign a frame index to the current frame being processed for rendering. It increments this index each frame and passes it to ovr_GetPredictedDisplayTime to obtain the correct timing for pose prediction.
  2. The main thread should call the thread-safe function ovr_GetTrackingState with the predicted time value. It can also call ovr_CalcEyePoses if necessary for render setup.
  3. The main thread needs to pass the current frame index and eye poses to the render thread, along with any rendering commands or frame data it needs.
  4. When the rendering commands are executed on the render thread, developers need to make sure these things hold:
    a. The actual poses used for frame rendering are stored into the layer's RenderPose.
    b. The same value of frameIndex as was used on the main thread is passed into ovr_SubmitFrame.

The following code illustrates this in more detail (SetFrameHMDData and GetFrameHMDData are application-defined helpers; a sketch of one possible implementation follows the listing):

void MainThreadProcessing()
{
    frameIndex++;

    // Ask the API for the time when this frame is expected to be displayed.
    double frameTiming = ovr_GetPredictedDisplayTime(session, frameIndex);

    // Get the corresponding predicted pose state.
    ovrTrackingState state = ovr_GetTrackingState(session, frameTiming, ovrTrue);
    ovrPosef eyePoses[2];
    ovr_CalcEyePoses(state.HeadPose.ThePose, hmdToEyeViewOffset, eyePoses);

    // Hand the frame index and poses off to the render thread.
    SetFrameHMDData(frameIndex, eyePoses);

    // Do render pre-processing for this frame.
    ...
}

void RenderThreadProcessing()
{
    int frameIndex;
    ovrPosef eyePoses[2];
    GetFrameHMDData(&frameIndex, eyePoses);

    // The poses actually used for rendering go into the layer.
    layer.RenderPose[0] = eyePoses[0];
    layer.RenderPose[1] = eyePoses[1];

    // Execute actual rendering to the eye textures.
    ...

    // Submit the frame with the one layer we have.
    ovrLayerHeader* layers = &layer.Header;
    ovrResult result = ovr_SubmitFrame(session, frameIndex, nullptr, &layers, 1);
}
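SetFrameHMDData and GetFrameHMDData are not LibOVR calls; they stand in for whatever thread-safe handoff mechanism the engine provides. Below is a minimal sketch of one possible implementation, using a single mutex-guarded slot. The struct, the globals, and their names are assumptions made for illustration, not part of the SDK.

#include <mutex>
#include <OVR_CAPI.h>

// Hypothetical handoff slot shared by the main and render threads.
struct FrameHMDData
{
    int      frameIndex = 0;
    ovrPosef eyePoses[2];
};

static FrameHMDData g_frameData;
static std::mutex   g_frameDataMutex;

// Called by the main thread after computing poses for frame frameIndex.
void SetFrameHMDData(int frameIndex, const ovrPosef eyePoses[2])
{
    std::lock_guard<std::mutex> lock(g_frameDataMutex);
    g_frameData.frameIndex  = frameIndex;
    g_frameData.eyePoses[0] = eyePoses[0];
    g_frameData.eyePoses[1] = eyePoses[1];
}

// Called by the render thread; index and poses are read under the same
// lock, so they always belong to the same frame.
void GetFrameHMDData(int* frameIndex, ovrPosef eyePoses[2])
{
    std::lock_guard<std::mutex> lock(g_frameDataMutex);
    *frameIndex = g_frameData.frameIndex;
    eyePoses[0] = g_frameData.eyePoses[0];
    eyePoses[1] = g_frameData.eyePoses[1];
}

A single slot means the render thread always sees the most recently published frame. An engine that pipelines more than one frame ahead would instead keep a small ring buffer keyed by frameIndex, so that an index and its poses can never be mixed between frames.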
