Shading Language: GLSL / Cg, the 3D programmer's jargon


GLSL is the 3D programmer's jargon. Really: if you don't understand the real-time 3D rendering pipeline, those code fragments read like ciphertext, so short yet doing so much, because the qualifiers abstract the data completely and rendering turns into pure mathematical expressions. That makes the world of 3D rendering wonderfully rich these days, but also genuinely hard, hard enough that many people never even get a glimpse into it.

Put simply, GLSL (and Cg, for that matter) does two things:
1. You no longer write the for loop yourself.
2. The vertices and textures fed in between glBegin and glEnd are kept strictly separate, then tied back together by interpolation.

And then... you can even stuff an AI matrix into a texture. Since the shader runs for every element anyway, you run your logic once per element, and instead of writing into the final buffer you write the result back into a texture (a rough sketch of this follows). Suddenly no number of pipelines is enough: database workloads, network packet compression, zip, codecs, they could all be pulled in. GPU = General Processing Unit!
Still, compared with the rendering that is now possible, those applications are hardly worth mentioning.
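Here is a minimal sketch of that render-to-texture idea, assuming the application has packed its data into a texture, bound it, and is drawing a full-screen quad whose output is captured into a second texture. DataTex and CellSize are made-up names, and the "logic" is just a neighbour average standing in for whatever you actually want to compute per element.

// fragment shader: one invocation per data element
uniform sampler2D DataTex;   // input data packed into a texture
uniform vec2      CellSize;  // 1.0 / texture dimensions

void main()
{
    // read this element and two of its neighbours
    vec4 c = texture2D(DataTex, gl_TexCoord[0].st);
    vec4 r = texture2D(DataTex, gl_TexCoord[0].st + vec2(CellSize.x, 0.0));
    vec4 u = texture2D(DataTex, gl_TexCoord[0].st + vec2(0.0, CellSize.y));

    // any per-element logic goes here; an average stands in for it
    gl_FragColor = (c + r + u) / 3.0;
}

The application then swaps the input and output textures and draws the quad again; that ping-pong loop is the whole trick.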

Key excerpts:

//It is executed once for each vertex.

// attribute qualified variables are typically changed per vertex
attribute float VertexTemp;

void main()
{
    /*
       The vertex position written in the application using
       glVertex() can be read from the built-in variable
       gl_Vertex. Use this value and the current model
       view transformation matrix to tell the rasterizer where
       this vertex is.
    */
    // the glVertex call issued between glBegin and glEnd
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// That is, a varying qualified variable exists for every fragment (per pixel); how its value is
// interpolated from the vertices across the primitive is decided by the built-in fixed functionality (Gouraud-style?):

The vertex shader gets information associated with each vertex through the attribute qualified variable. Information is passed from the vertex shader to the fragment shader through varying qualified variables, whose declarations must match between the vertex and fragment shaders.
The fixed functionality located between the vertex and fragment processors will interpolate the per-vertex values written to this varying variable.

Execution of the preceding shaders occurs multiple times to process a single primitive, once per vertex for the vertex shader and once per fragment for the fragment shader.
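As a small matched pair (LightIntensity is my own name, not from the book): the vertex shader writes the varying once per vertex, the fixed functionality interpolates it across the primitive, and the fragment shader reads the interpolated value once per fragment.

// vertex shader
varying float LightIntensity;   // declaration must match the fragment shader

void main()
{
    LightIntensity = 0.5;       // would normally come from a lighting computation
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragment shader
varying float LightIntensity;   // same declaration as in the vertex shader

void main()
{
    gl_FragColor = vec4(vec3(LightIntensity), 1.0);
}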


// matrix elements are column-major
mat4
 4 x 4 matrix of floating-point numbers
Just remember that the first index selects the column, not the row, and the second index selects the row.
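To make the indexing concrete (m is just a throwaway name):

void indexingSketch()
{
    mat4 m = mat4(1.0);               // identity matrix
    vec4 col = m[2];                  // first index selects a column: the third column
    float e  = m[2][1];               // second index selects the row: row 1 of column 2
    m[3] = vec4(1.0, 2.0, 3.0, 1.0);  // writing a whole column (here, the translation column)
}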

// Textures have to be client-supplied values; in other words, a mesh's vertices, faces, textures and lights
// all come from user input within one vertex-shader/fragment-shader pass. One pass generally handles one object,
// but since it all runs in parallel it is still fast; the newest NVIDIA cards supposedly go up to 128 parallel
// pipelines (not that I can afford one).
Hence, it provides a simple opaque handle to encapsulate what to look up. These handles are called SAMPLERS.
When the application initializes a sampler,
the OpenGL implementation stores into it whatever information is needed to communicate what texture to access.
Shaders cannot themselves initialize samplers. They can only receive them from the application,
through a uniform qualified sampler, or pass them on to user or built-in functions.
 As a function parameter, a sampler cannot be modified,
so there is no way for a shader to change a sampler's value.
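On the GLSL side that boils down to something like the sketch below (BaseTex and TexCoord are assumed names): the shader only declares the sampler as a uniform and reads through it with texture2D, while the application binds a texture to some texture unit and sets the uniform to that unit's index with glUniform1i.

// fragment shader
uniform sampler2D BaseTex;   // opaque handle; the application sets it to a texture unit index
varying vec2 TexCoord;       // assumed to be written by the vertex shader

void main()
{
    // the shader can only read through the sampler, never change it
    gl_FragColor = texture2D(BaseTex, TexCoord);
}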

// arrays can be declared without an explicit size
Arrays, unless they are function parameters, do not have to be declared with a size. A declaration like

vec4 points[];
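If I read the rule right, indexing such an array only with constant expressions lets the compiler size it from the largest index used (and it may later be redeclared with an explicit size); a tiny sketch:

vec4 points[];             // declared without a size

void fillPoints()
{
    points[0] = vec4(0.0); // constant indices like these give the array
    points[2] = vec4(1.0); // an implicit size of at least 3
}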

// This neatly shows how data moves between the pipeline stages:
Attribute, uniform, and varying variables cannot be initialized when declared.

// client-program input, per glVertex
attribute float Temperature;  // no initializer allowed,
                              // the vertex API sets this
// client-program input, set infrequently (between draws, via the uniform API)
uniform int Size;             // no initializer allowed,
                              // the uniform setting API sets this
// per pixel: interpolated from the vertex shader output; assigned per vertex, consumed per pixel
varying float density;        // no initializer allowed, the vertex
                              // shader must programmatically set this

attribute
 For frequently changing information, from the application to a vertex shader
 
uniform
 For infrequently changing information, from the application to either a vertex shader or a fragment shader
 
varying
 For interpolated information passed from a vertex shader to a fragment shader
 
const
 For declaring nonwritable, compile-time constant variables, as in C
 
// Basically, once a vertex shader is in use, the corresponding parts of the original fixed pipeline are skipped (a sketch follows after this list):

Specifically, when the vertex processor is executing a vertex shader, the following fixed functionality operations are affected:

The modelview matrix is not applied to vertex coordinates.

The projection matrix is not applied to vertex coordinates.

The texture matrices are not applied to texture coordinates.

Normals are not transformed to eye coordinates.

Normals are not rescaled or normalized.

Normalization of GL_AUTO_NORMAL evaluated normals is not performed.

Texture coordinates are not generated automatically.

Per-vertex lighting is not performed.

Color material computations are not performed.

Color index lighting is not performed.

Point size distance attenuation is not performed.

All of the preceding apply to setting the current raster position.
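Which is in effect why even a simple vertex shader has to redo whatever it still wants from that list. A rough sketch (the hard-coded light direction is a stand-in, not the real fixed-function lighting):

void main()
{
    // transform the normal to eye space and renormalize; fixed function no longer does this
    vec3 eyeNormal = normalize(gl_NormalMatrix * gl_Normal);

    // pass texture coordinates through; no texture matrix is applied automatically
    gl_TexCoord[0] = gl_MultiTexCoord0;

    // a crude stand-in for the per-vertex lighting fixed function would have done
    gl_FrontColor = vec4(vec3(max(dot(eyeNormal, vec3(0.0, 0.0, 1.0)), 0.0)), 1.0);

    // apply the modelview and projection matrices ourselves
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}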


// clipping and everything after it happens once the vertex shader is done

The following fixed functionality operations are applied to vertex values that are the result of executing the vertex shader:

Color clamping or masking (for built-in varying variables that deal with color but not for user-defined varying variables)

Perspective division on clip coordinates

Viewport mapping

Depth range scaling

Clipping, including user clipping

Front face determination

Flat-shading

Color, texture coordinate, fog, point size, and user-defined varying clipping

Final color processing

// extended uses of the original fixed functionality
If a vertex shader is active when glRasterPos is called, it processes the coordinates provided with the glRasterPos command just as if these coordinates were specified with a glVertex command. The vertex shader is responsible for outputting the values necessary to compute the current raster position data.

User clipping can be used in conjunction with a vertex shader. The user clip planes are specified as usual with the glClipPlane command. When specified, these clip planes are transformed by the inverse of the current modelview matrix.
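So a vertex shader that wants user clipping to keep working typically provides the eye-space position itself through the built-in gl_ClipVertex, roughly like this:

void main()
{
    // eye-space position, tested against the glClipPlane planes
    gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex;
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
}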


terms:
clamp, occlusion

// Well, computing a whole volume is of course not something done here; that is an object-space matter, not really part of the display layer (though it could also be done separately in the vertex shader). Soft shadows belong here too.
With deferred shading, the idea is to first quickly determine the surfaces that will be visible in the final scene and apply complex and time-consuming shader effects only to the pixels that make up those visible surfaces.
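A hedged sketch of that second (lighting) pass: assume the application has already rendered eye-space positions and normals into two textures and is now drawing a full-screen quad; PositionTex, NormalTex, LightPosEye and TexCoord are all made-up names.

// fragment shader for the lighting pass
uniform sampler2D PositionTex;  // eye-space positions from the first pass
uniform sampler2D NormalTex;    // eye-space normals from the first pass
uniform vec3      LightPosEye;  // light position in eye space

varying vec2 TexCoord;          // full-screen quad texture coordinate

void main()
{
    // only visible surfaces made it into these textures, so the shading below
    // runs once per final pixel instead of once per fragment ever rasterized
    vec3 p = texture2D(PositionTex, TexCoord).xyz;
    vec3 n = normalize(texture2D(NormalTex, TexCoord).xyz);

    float diffuse = max(dot(n, normalize(LightPosEye - p)), 0.0);
    gl_FragColor  = vec4(vec3(diffuse), 1.0);
}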

// textures have become general-purpose memory
Textures can also store intermediate rendering results;
they can serve as lookup tables for complex functions;
they can store normals, normal perturbation factors, gloss values,
visibility information, and polynomial coefficients; and do many other things.
These things could not be done nearly as easily,
if at all, in unextended OpenGL,
 and this flexibility means that texture maps are coming closer to being general-purpose memory that can be used for arbitrary purposes.
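The lookup-table use, for instance, can look like this sketch (LutTex and X are assumed names): the application precomputes some expensive function into a 1D texture, and the fragment shader replaces the evaluation with a single fetch.

uniform sampler1D LutTex;   // precomputed table of an expensive function on [0,1]
varying float X;            // the function argument, interpolated per fragment

void main()
{
    float fx = texture1D(LutTex, X).r;  // one texture fetch instead of the real computation
    gl_FragColor = vec4(vec3(fx), 1.0);
}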