
RenderingProject/CMArchitecture

From FedoraProject


Contents

  • 1 Architectures for a Compositing Manager
    • 1.1 1. Compositing manager on top of RENDER
      • 1.1.1 1.a. RENDER acceleration written directly to hardware
      • 1.1.2 1.b. RENDER acceleration written on top of GL in the server.
      • 1.1.3 1.b.i. GL in the server done with a GL-based server
      • 1.1.4 1.b.ii. GL in the server done incrementally
    • 1.2 2. Compositing manager on top of OpenGL
      • 1.2.1 2.a. Compositing manager renders to separate server
      • 1.2.2 2.b. Compositing manager renders to same server
      • 1.2.3 2.b.i. Compositing manager uses indirect rendering
      • 1.2.4 2.b.ii. Compositing manager uses direct rendering

Architectures for a Compositing Manager

There are various ways that we can implement an X compositing manager; this document tries to sketch out the tradeoffs involved.

1. Compositing manager on top of RENDER

Using RENDER to implement a compositing manager has the advantage that RENDER works directly with the objects that the compositing manager is manipulating: pixmaps and windows.

But it has disadvantages. First, the RENDER API does not expose the full power of modern graphics cards. If you want to, say, use pixel shaders in your compositing manager, you are out of luck.

Second, to get decent performance, we need hardware acceleration. The basic problem with hardware acceleration of RENDER is that it doesn't always match the hardware that well; in particular, for filtered scaling, hardware is mipmap-based, a concept not exposed in RENDER.

Hardware accelerating RENDER to get good performance for the drawing we do within windows is not challenging, but getting good performance for full-window and full-desktop operations may be harder.

1.a. RENDER acceleration written directly to hardware

The first approach to accelerating RENDER is what we do currently: directly program the hardware. At some level, this allows optimum performance, but it involves duplicating work that's being done in the 3D drivers, which have a much more active development community. RENDER is also an unknown quantity to hardware vendors, so we're unlikely to get good support in closed-source or vendor-written drivers. Using a known API like OpenGL to define the hardware interaction would give us much more of a common language with hardware vendors.

1.b. RENDER acceleration written on top of GL in the server.

The other approach is to implement RENDER on top of GL. Since GL is a pretty close match to the hardware, we shouldn't lose a lot of efficiency doing this, and features such as pixel shaders should eventually allow for very high-powered implementations of RENDER compositing. A start on this work has been made by Dave Reveman with the 'glitz' library.

1.b.i. GL in the server done with a GL-based server

If we are accelerating RENDER in the server with GL, we clearly need GL in the server. We could do this by running the entire server as a GL client, on top of something like mesa-solo, or nested on top of the existing server like Xgl. Both have their disadvantages: mesa-solo is far from usable for hosting an X server, and a nested X server still requires that the "backend" X server be maintained.

1.b.ii. GL in the server done incrementally

An alternative approach would be to start by implementing indirect rendering; once we had that, we'd have DRI drivers running inside the server process. It would be conceivable to use those DRI drivers to implement parts of the 2D rendering stack while keeping all the video card initialization, Xvideo implementation, and so forth the same. This work has been started in the accel_indirect_glx branch of Xorg and is making good progress.

2. Compositing manager on top of OpenGL

2.a. Compositing manager renders to separate server

The way that Luminocity works is that it runs a headless "Xfake" server (all software), sucks the window contents off of that server, then displays them on an output server. Input events are forwarded the other way, from the output server to the headless server.

This model has a great deal of simplicity, because the CM can simply be a normal direct rendering client. And it doesn't perform that badly; rendering within windows simply isn't the limiting factor for normal desktop apps. Performance could be further optimized by running the CM in the same process as the headless server (imagine a libxserver).

The forwarding of input events is pretty difficult, and having to extend that for XKB, for Xinput, for Xrandr, and so forth would be painful, though doable. The real killer of this model is Xvideo and 3D applications. There's no way that we can get reasonable performance for such applications without having them talk to the output server.

2.b. Compositing manager renders to same server

The more reasonable final model for a GL-based compositing manager is that we use a single X server. Application rendering, whether classic X, GL, or Xvideo, is redirected to offscreen pixmaps. Then a GL-based compositing manager transfers the contents of those pixmaps to textures and renders to the screen.

2.b.i. Compositing manager uses indirect rendering

We normally think that direct rendering is always better than indirect rendering. However, this may not be the case for the compositing manager. The source data that we are using is largely on the X server; for a direct rendering client to copy a pixmap onto a texture basically requires an XGetImage, meaning copying the data from the server to the client. So, the right approach may be to use an indirect rendering client, which could simply manipulate textures as opaque objects without ever needing to touch the texture data directly.

To do the work of rendering window pixmaps to the screen, we could simply generate temporary textures and use glCopyTexSubImage2D to copy from the composite-redirected pixmap into the texture.

A different approach would be to have a GL extension similar to pbuffers where a pixmap is "bound" to a texture. Would it be a problem if the application could write to the window (and thus pixmap) while the compositing manager was working? The pbuffer spec prohibits changing pbuffer contents while a pbuffer is bound to a texture. It also likely involves copying in any case, since textures have different formats and restrictions than pixmaps. Avoiding new GL extensions is definitely a plus in any case. So, perhaps the simple approach with the copy is the right one.

Automatic mipmap generation would have to be implemented for the DRI to make filtered scaling down efficient, since sucking the image back to the client to generate mipmaps would be horribly inefficient.

We might want a way to specify that the textures we are creating to shadow windows are temporary and can be discarded. pbuffers or the (draft) superbuffer specification may allow expressing the right semantics.

Update: the GLX_EXT_texture_from_pixmap spec addresses a few of these issues. It is only defined for pixmaps, not windows, but since the Composite extension exposes the redirected image as a pixmap, this works. It specifies a new call, glXBindTexImageEXT, that is required to act like glTexImage2D, i.e., it must act like a copy when it is executed. It also addresses the mipmapping issue by allowing the server to indicate whether it can generate mipmaps for bound pixmaps automatically. Finally, the expected usage model is that textures are bound to pixmaps for a single invocation and then unbound; the implementation may track updates between binds as an optimization, potentially eliminating copies.

While this does introduce some additional complexity due to the new extension, it does provide all the information necessary to enable a direct-rendering compositing manager (see below), again with a natural transition from older to newer stacks.

2.b.ii. Compositing manager uses direct rendering

A sophisticated implementation of direct rendering could perhaps make copying from a pixmap onto a texture more efficient than XGetImage. Right now the DRI relies heavily on having an application-memory copy of all textures, but it is conceivable that it could be extended to allow textures that are "locked" into video memory. With such a texture, you could imagine the server taking care of copying from pixmap to texture, even if rendering with the texture was being done by the client.

Still, there seems to be an inevitable high synchronization overhead for such a system; the client and server are doing a complex dance to get the rendering done.
