Computer English Translation -- Topic 1


OpenGL is an industry-standard, cross-platform APPLICATION PROGRAMMING INTERFACE (API). The specification for this API was finalized in 1992, and the first implementations appeared in 1993. It was largely compatible with a proprietary API called Iris GL (Graphics Library) that was designed and supported by Silicon Graphics, Inc. To establish an industry standard, Silicon Graphics collaborated with various other graphics hardware companies to create an open standard, which was dubbed "OpenGL."

The evolution of OpenGL is controlled by the OpenGL Architecture Review Board, or ARB, created by Silicon Graphics in 1992. This group is governed by a set of by-laws, and its primary task is to guide OpenGL by controlling the specification and conformance tests. The original ARB contained representatives from SGI, Intel, Microsoft, Compaq, Digital Equipment Corporation, Evans & Sutherland, and IBM. The current members of the ARB are 3Dlabs, Apple, ATI, Dell, IBM, Intel, NVIDIA, SGI, and Sun Microsystems.

OpenGL shares many of Iris GL's design characteristics. Its intention is to provide access to graphics hardware capabilities at the lowest possible level that still provides hardware independence. It is designed to be the lowest-level interface for accessing graphics hardware. OpenGL has been implemented in a variety of operating environments, including Macs, PCs, and UNIX-based systems. It has been supported on a variety of hardware architectures, from those that support little in hardware other than the frame buffer itself to those that accelerate virtually everything in hardware.

Since the release of the initial OpenGL specification (version 1.0) in June 1992, six revisions have added new functionality to the API. The current version of the OpenGL specification is 2.0. The first conformant implementations of OpenGL 1.0 began appearing in 1993.

·         Version 1.1 was finished in 1997 and added support for two important capabilities: vertex arrays and texture objects.

·         The specification for OpenGL 1.2 was released in 1998 and added support for 3D textures and an optional set of imaging functionality.

·         The OpenGL 1.3 specification was completed in 2001 and added support for cube map textures, compressed textures, multitextures, and other things.

·         OpenGL 1.4 was completed in 2002 and added automatic mipmap generation, additional blending functions, internal texture formats for storing depth values for use in shadow computations, support for drawing multiple vertex arrays with a single command, more control over point rasterization, control over stencil wrapping behavior, and various additions to texturing capabilities.

·         The OpenGL 1.5 specification was published in October 2003. It added support for vertex buffer objects, shadow comparison functions, occlusion queries, and non-power-of-2 textures.

All versions of OpenGL through 1.5 were based on a fixed-function pipeline: the user could control various parameters, but the underlying functionality and order of processing were fixed. OpenGL 2.0, finalized in September 2004, opened up the processing pipeline for user control by providing programmability for both vertex processing and fragment processing as part of the core OpenGL specification. With this version of OpenGL, application developers have been able to implement their own rendering algorithms, using a high-level shading language. The addition of programmability to OpenGL represents a fundamental shift in its design, hence the change to version number 2.0 from 1.5. However, the change to the major version number does not represent any loss of compatibility with previous versions of OpenGL. OpenGL 2.0 is completely backward compatible with OpenGL 1.5: applications that run on OpenGL 1.5 can run unmodified on OpenGL 2.0. Other features added in 2.0 include support for multiple render targets (rendering to multiple buffers simultaneously), non-power-of-2 textures (thus easing the restriction that textures must always be a power of 2 in each dimension), point sprites (screen-aligned textured quadrilaterals that are drawn with the point primitive), and separate stencil functionality for front- and back-facing surfaces.
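As a rough sketch of what this programmability looks like, the fragment below uses the OpenGL 2.0 shader API to compile and activate a trivial vertex shader. The shader source and the function name installTrivialShader are illustrative only; a header or extension loader exposing the OpenGL 2.0 entry points is assumed, and error checking is omitted.

```c
#include <GL/gl.h>   /* assumes OpenGL 2.0 entry points are available */

/* Illustrative GLSL vertex shader: a fixed-function-style transform. */
static const char *vsrc =
    "void main(void)\n"
    "{\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";

/* Compile, link, and activate the shader (error checks omitted). */
GLuint installTrivialShader(void)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsrc, NULL);   /* source is copied at this call */
    glCompileShader(vs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glLinkProgram(prog);
    glUseProgram(prog);   /* replaces fixed-function vertex processing */
    return prog;
}
```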

Because of its fundamental design as a fixed-function state machine, before OpenGL 2.0 the only way to modify OpenGL was to define extensions to it. Therefore, a great deal of functionality is available in various OpenGL implementations in the form of extensions that expose new hardware functionality. OpenGL has a well-defined extension mechanism, and hardware vendors are free to define and implement features that expose new hardware functionality. Because only OpenGL implementors can implement extensions, applications have no way to extend the functionality of OpenGL beyond what their OpenGL provider supplies.

To date, close to 300 extensions have been defined. Extensions that are supported by only one vendor are identified by a short prefix unique to that vendor (e.g., SGI for extensions developed by Silicon Graphics, Inc.). Extensions that are supported by more than one vendor are denoted by the prefix EXT in the extension name. Extensions that have been thoroughly reviewed by the ARB are designated with an ARB prefix in the extension name to indicate that they have a special status as a recommended way of exposing a certain piece of functionality. Extensions that achieve the ARB designation are candidates to be added to standard OpenGL. Published specifications for OpenGL extensions are available at the OpenGL extension registry at http://oss.sgi.com/projects/ogl-sample/registry.

The extensions supported by a particular OpenGL implementation can be determined by calling the OpenGL glGetString function with the symbolic constant GL_EXTENSIONS. The returned string contains a list of all the extensions supported in the implementation, and some vendors currently support close to 100 separate OpenGL extensions. It can be daunting for an application to determine whether the needed extensions are present on a variety of implementations, and what to do if they're not. The proliferation of extensions has been primarily a positive factor for the development of OpenGL, but in a sense, it has become a victim of its own success. It allows hardware vendors to expose new features easily, but it presents application developers with a dizzying array of nonstandard options. Like any standards body, the ARB is cautious about promoting functionality from extension status to standard OpenGL.
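As a minimal sketch of such a check, the helper below (the name hasExtension is hypothetical) searches the GL_EXTENSIONS string for a whole, space-delimited token, since a naive substring match could mistake one extension name for a longer one that merely shares its prefix. A current OpenGL context is assumed.

```c
#include <string.h>
#include <GL/gl.h>

/* Return nonzero if `name` appears as a complete token in the
   extension string; a current context must be bound. */
int hasExtension(const char *name)
{
    const char *exts = (const char *) glGetString(GL_EXTENSIONS);
    size_t len = strlen(name);

    while (exts != NULL && *exts != '\0') {
        const char *hit = strstr(exts, name);
        if (hit == NULL)
            return 0;
        /* accept only whole, space-delimited tokens */
        if ((hit == exts || hit[-1] == ' ') &&
            (hit[len] == ' ' || hit[len] == '\0'))
            return 1;
        exts = hit + len;
    }
    return 0;
}
```

An application might then guard optional code paths with, for example, if (hasExtension("GL_ARB_multitexture")) and fall back to a single-texture path otherwise.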

Before version 2.0 of OpenGL, none of the underlying programmability of graphics hardware was exposed. The original designers of OpenGL, Mark Segal and Kurt Akeley, stated, "One reason for this decision is that, for performance reasons, graphics hardware is usually designed to apply certain operations in a specific order; replacing these operations with arbitrary algorithms is usually infeasible." This statement may have been mostly true when it was written in 1994 (there were programmable graphics architectures even then). But today, all of the graphics hardware that is being produced is programmable. Because of the proliferation of OpenGL extensions and the need to support Microsoft's DirectX API, hardware vendors have no choice but to design programmable graphics architectures. As discussed in the remaining chapters of this book, providing application programmers with access to this programmability is the purpose of the OpenGL Shading Language.

The OpenGL API is focused on drawing graphics into frame buffer memory and, to a lesser extent, on reading back values stored in that frame buffer. It is somewhat unique in that its design includes support for drawing three-dimensional geometry (such as points, lines, and polygons, collectively referred to as PRIMITIVES) as well as for drawing images and bitmaps.
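For instance, a single triangle primitive can be specified with the classic immediate-mode calls. This is a minimal sketch that assumes a current context and a suitable projection:

```c
#include <GL/gl.h>

/* Draw one triangle primitive in immediate mode. */
void drawTriangle(void)
{
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}
```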

The execution model for OpenGL can be described as client-server. An application program (the client) issues OpenGL commands that are interpreted and processed by an OpenGL implementation (the server). The application program and the OpenGL implementation can execute on a single computer or on two different computers. Some OpenGL state is stored in the address space of the application (client state), but the majority of it is stored in the address space of the OpenGL implementation (server state).

OpenGL commands are always processed in the order in which they are received by the server, although command completion may be delayed due to intermediate operations that cause OpenGL commands to be buffered. Out-of-order execution of OpenGL commands is not permitted. This means, for example, that a primitive will not be drawn until the previous primitive has been completely drawn. This in-order execution also applies to queries of state and frame buffer read operations. These commands return results that are consistent with complete execution of all previous commands.

Data binding for OpenGL occurs when commands are issued, not when they are executed. Data passed to an OpenGL command is interpreted when the command is issued and copied into OpenGL memory if needed. Subsequent changes to this data by the application have no effect on the data that is now stored within OpenGL.
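A minimal sketch of this behavior, assuming a current context:

```c
#include <GL/gl.h>

void bindingExample(void)
{
    GLfloat color[3] = { 1.0f, 0.0f, 0.0f };

    glColor3fv(color);   /* the values are interpreted and copied now */
    color[0] = 0.0f;     /* too late: the current color remains red   */
}
```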

OpenGL is an API for drawing graphics, and so the fundamental purpose of OpenGL is to transform data provided by an application into something that is visible on the display screen. This processing is often referred to as RENDERING. Typically, this processing is accelerated by specially designed hardware, but some or all operations of the OpenGL pipeline can be performed by a software implementation running on the CPU. How this division between software and hardware is handled is transparent to the user of the OpenGL implementation. The important thing is that the results of rendering conform to the results defined by the OpenGL specification.

The hardware that is dedicated to drawing graphics and maintaining the contents of the display screen is often called the GRAPHICS ACCELERATOR. Graphics accelerators typically have a region of memory that is dedicated to maintaining the contents of the display. Every visible picture element (pixel) of the display is represented by one or more bytes of memory on the graphics accelerator. A grayscale display might have a byte of memory to represent the gray level at each pixel. A color display might have a byte of memory for each of red, green, and blue in order to represent the color value for each pixel. This so-called DISPLAY MEMORY is scanned (refreshed) a certain number of times per second in order to maintain a flicker-free representation on the display. Graphics accelerators also typically have a region of memory called OFFSCREEN MEMORY that is not displayable and is used to store things that aren't visible.
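As a back-of-the-envelope example (the resolution here is an arbitrary assumption), a 1280 x 1024 true-color display with one byte each for red, green, and blue per pixel needs roughly 3.75 MB of display memory:

```c
#include <stdio.h>

int main(void)
{
    /* one byte each of red, green, and blue per pixel */
    unsigned long bytes = 1280UL * 1024UL * 3UL;
    printf("%lu bytes (~%.2f MB)\n", bytes, bytes / (1024.0 * 1024.0));
    return 0;   /* prints 3932160 bytes (~3.75 MB) */
}
```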

OpenGL assumes that allocation of display memory and offscreen memory is handled by the window system. The window system decides which portions of memory may be accessed by OpenGL and how these portions are structured. In each environment in which OpenGL is supported, a small set of function calls tie OpenGL into that particular environment. In the Microsoft Windows environment, this set of routines is called WGL (pronounced "wiggle"). In the X Window System environment, this set of routines is called GLX. In the Macintosh environment, this set of routines is called AGL. In each environment, this set of calls supports such things as allocating and deallocating regions of graphics memory, allocating and deallocating data structures called GRAPHICS CONTEXTS that maintain OpenGL state, selecting the current graphics context, selecting the region of graphics memory in which to draw, and synchronizing commands between OpenGL and the window system.
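As one concrete example, a minimal GLX sketch might look like the following. The window win is assumed to already exist with a visual compatible with the one chosen here, and error handling is omitted.

```c
#include <X11/Xlib.h>
#include <GL/glx.h>

/* Create a GLX graphics context and make it current for `win`. */
void bindGLToWindow(Display *dpy, Window win)
{
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    glXMakeCurrent(dpy, win, ctx);   /* select the current graphics context */
}
```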

The region of graphics memory that is modified as a result of OpenGL rendering is called the FRAME BUFFER. In a windowing system, the OpenGL notion of a frame buffer corresponds to a window. Facilities in window-system-specific OpenGL routines let users select the frame buffer characteristics for the window. The windowing system typically also defines how the OpenGL frame buffer behaves when windows overlap. In a nonwindowed system, the OpenGL frame buffer corresponds to the entire display.

A window that supports OpenGL rendering (i.e., a frame buffer) may consist of some combination of the following (a sketch of requesting such a configuration appears after the list):

·         Up to four color buffers

·         A depth buffer

·         A stencil buffer

·         An accumulation buffer

·         A multisample buffer

·         One or more auxiliary buffers
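A minimal sketch of requesting such a configuration, using the GLUT toolkit as one possibility (the window title is arbitrary):

```c
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    /* ask for a double-buffered RGB window with depth and stencil buffers */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STENCIL);
    glutCreateWindow("frame buffer configuration sketch");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```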

Most graphics hardware supports both a front buffer and a back buffer in order to perform DOUBLE BUFFERING. This allows the application to render into the (offscreen) back buffer while displaying the (visible) front buffer. When rendering is complete, the two buffers are swapped so that the completed rendering is now displayed as the front buffer and rendering can begin anew in the back buffer. When double buffering is used, the end user never sees the graphics when they are in the process of being drawn, only the finished image. This technique allows smooth animation at interactive rates.
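In GLUT terms, the typical pattern is a display callback that draws into the back buffer and then swaps. drawScene here stands in for hypothetical application drawing code:

```c
#include <GL/glut.h>

extern void drawScene(void);   /* hypothetical application drawing code */

/* Draw into the (offscreen) back buffer, then make it visible. */
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene();
    glutSwapBuffers();   /* exchange the front and back buffers */
}
```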

Stereo viewing is supported by having a color buffer for the left eye and one for the right eye. Double buffering is supported by having both a front and a back buffer. A double-buffered stereo window will therefore have four color buffers: front left, front right, back left, and back right. A normal (nonstereo) double-buffered window will have a front buffer and a back buffer. A single-buffered window will have only a front buffer.

If 3D objects are to be drawn with hidden-surface removal, a DEPTH BUFFER is needed. This buffer stores the depth of the displayed object at each pixel. As additional objects are drawn, a depth comparison can be performed at each pixel to determine whether the new object is visible or obscured.
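Enabling this per-pixel depth comparison is a one-time setup plus a per-frame clear. A sketch, assuming the window was created with a depth buffer:

```c
#include <GL/gl.h>

/* One-time setup: turn on hidden-surface removal. */
void setupDepthTest(void)
{
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);   /* keep fragments nearer than the stored depth */
}

/* Per frame: reset stored depths to "farthest" before drawing. */
void beginFrame(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
```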

A STENCIL BUFFER is used for complex masking operations. A complex shape can be stored in the stencil buffer, and subsequent drawing operations can use the contents of the stencil buffer to determine whether to update each pixel.
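A minimal two-pass sketch: first mark the mask shape in the stencil buffer, then confine drawing to the marked region. drawMaskShape and drawScene are hypothetical application routines, and the window is assumed to have a stencil buffer.

```c
#include <GL/gl.h>

extern void drawMaskShape(void);   /* hypothetical mask geometry */
extern void drawScene(void);       /* hypothetical scene drawing */

void drawMaskedScene(void)
{
    glEnable(GL_STENCIL_TEST);
    glClear(GL_STENCIL_BUFFER_BIT);

    /* pass 1: write 1 into the stencil buffer wherever the mask is drawn */
    glStencilFunc(GL_ALWAYS, 1, 1);
    glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
    drawMaskShape();

    /* pass 2: update only pixels whose stencil value equals 1 */
    glStencilFunc(GL_EQUAL, 1, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawScene();
}
```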

The ACCUMULATION BUFFER is a color buffer that typically has higher-precision components than the color buffers. Several images can thus be accumulated to produce a composite image. One use of this capability would be to draw several frames of an object in motion into the accumulation buffer. When each pixel of the accumulation buffer is divided by the number of frames, the result is a final image that shows motion blur for the moving objects. Similar techniques can be used to simulate depth-of-field effects and to perform high-quality full-screen antialiasing.
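The motion-blur idea maps directly onto glAccum: accumulate each frame with weight 1/n, then return the composite to the color buffer. drawFrame is a hypothetical routine that draws the object at time step i, and the window is assumed to have an accumulation buffer.

```c
#include <GL/gl.h>

extern void drawFrame(int i);   /* hypothetical: object at time step i */

/* Accumulate n frames, each pre-weighted by 1/n, then copy the
   composite (the motion-blurred image) back to the color buffer. */
void motionBlur(int n)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < n; i++) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawFrame(i);
        glAccum(GL_ACCUM, 1.0f / n);
    }
    glAccum(GL_RETURN, 1.0f);
}
```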

Normally, when objects are drawn, a single decision is made as to whether the graphics primitive affects a pixel on the screen. The MULTISAMPLE BUFFER is a buffer that allows everything that is rendered to be sampled multiple times within each pixel in order to perform high-quality full-screen antialiasing without rendering the scene more than once. Each sample within a pixel contains color, depth, and stencil information, and the number of samples per pixel can be queried. When a window includes a multisample buffer, it does not include separate depth or stencil buffers. As objects are rendered, the color samples are combined to produce a single color value, and that color value is passed on to be written into the color buffer. Because multisample buffers contain multiple samples (often 4, 8, or 16) of color, depth, and stencil for every pixel in the window, they can use up large amounts of offscreen graphics memory.
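The per-pixel sample count mentioned above can be queried at run time. A sketch, assuming a context with a multisample buffer and headers that expose the OpenGL 1.3 tokens GL_SAMPLES and GL_MULTISAMPLE:

```c
#include <GL/gl.h>

/* Query samples per pixel and enable multisample rasterization. */
GLint enableMultisampling(void)
{
    GLint samples = 0;
    glGetIntegerv(GL_SAMPLES, &samples);
    glEnable(GL_MULTISAMPLE);
    return samples;   /* often 4, 8, or 16 */
}
```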

AUXILIARY BUFFERS are offscreen memory buffers that can store arbitrary data such as intermediate results from a multipass rendering algorithm. A frame buffer may have 1, 2, 3, 4, or even more associated auxiliary buffers.
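A short sketch of using one: query how many auxiliary buffers exist, redirect rendering into the first, and restore the usual target afterward.

```c
#include <GL/gl.h>

/* Render intermediate results into the first auxiliary buffer, if any. */
void drawIntoAux(void)
{
    GLint n = 0;
    glGetIntegerv(GL_AUX_BUFFERS, &n);
    if (n > 0) {
        glDrawBuffer(GL_AUX0);   /* subsequent drawing goes offscreen */
        /* ... draw intermediate results here ... */
        glDrawBuffer(GL_BACK);   /* restore the default back buffer  */
    }
}
```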