Unity Shader Study Notes 1 (Unity 5.6)


UNITY_UV_STARTS_AT_TOP

Usually used to detect the D3D platform; it can matter for texture sampling when anti-aliasing is enabled, since the UV origin is at the top there.

UNITY_SINGLE_PASS_STEREO

Single-pass stereo rendering, currently used mainly for VR.

UNITY_COLORSPACE_GAMMA

Whether the current color space is gamma or linear.

Variables

Transformations

The most commonly used variables; probably nobody is unclear on these. All are float4x4.

UNITY_MATRIX_MVP

model * view * projection

UNITY_MATRIX_MV

model * view

UNITY_MATRIX_V

view

UNITY_MATRIX_P

projection

UNITY_MATRIX_VP

view * projection

UNITY_MATRIX_T_MV

The transpose of model * view.

UNITY_MATRIX_IT_MV

The inverse transpose of model * view.

_Object2World _World2Object

As the names suggest: object-to-world and world-to-object matrices (renamed unity_ObjectToWorld and unity_WorldToObject from Unity 5.4 on).

Camera and screen

_WorldSpaceCameraPos

float3
The camera's world-space position.

_ProjectionParams

float4
// x = 1 or -1 (rarely used; -1 means rendering with a flipped projection matrix)
// y = near plane
// z = far plane
// w = 1/far plane

_ScreenParams

float4
Dividing screen-space pixel coordinates by _ScreenParams.xy gives viewport-space coordinates.
x is the camera’s render target width in pixels, y is the camera’s render target height in pixels, z is 1.0 + 1.0/width and w is 1.0 + 1.0/height
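As a minimal sketch of the note above (the frag signature is illustrative; VPOS requires #pragma target 3.0):

```hlsl
// Assumes this frag is part of a pass that includes UnityCG.cginc.
fixed4 frag (UNITY_VPOS_TYPE vpos : VPOS) : SV_Target
{
    // vpos.xy is in pixels; dividing by the render-target size
    // gives viewport-space coordinates in [0, 1].
    float2 viewportUV = vpos.xy / _ScreenParams.xy;
    return fixed4(viewportUV, 0, 1);   // visualize as a red/green gradient
}
```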

_ZBufferParams

float4
Used to linearize Z-buffer values.
x is (1-far/near), y is (far/near), z is (x/far) and w is (y/far)
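For reference, these components are exactly what the UnityCG.cginc helpers Linear01Depth and LinearEyeDepth use to linearize a raw depth-buffer value z; the helper names below are my own:

```hlsl
// Z buffer value to linear 0..1 depth (as in UnityCG.cginc's Linear01Depth)
inline float MyLinear01Depth (float z)
{
    return 1.0 / (_ZBufferParams.x * z + _ZBufferParams.y);
}

// Z buffer value to linear view-space depth (as in LinearEyeDepth)
inline float MyLinearEyeDepth (float z)
{
    return 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w);
}
```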

unity_OrthoParams

Parameters of an orthographic camera.
x is orthographic camera’s width, y is orthographic camera’s height, z is unused and w is 1.0 when camera is orthographic, 0.0 when perspective

unity_CameraProjection

float4x4
The camera's projection matrix.

unity_CameraInvProjection

The inverse of unity_CameraProjection.

unity_CameraWorldClipPlanes[6]

float4
The camera's frustum planes in world space, in the order:
left, right, bottom, top, near, far

Time

The unit is seconds.

_Time

float4
// (t/20, t, t*2, t*3)
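A common use, sketched below, is animating UVs; _ScrollSpeed is a hypothetical material property:

```hlsl
sampler2D _MainTex;
float _ScrollSpeed;   // hypothetical property; UV units per second

fixed4 frag (float2 uv : TEXCOORD0) : SV_Target
{
    // _Time.y is time in seconds, so this scrolls the texture
    // horizontally at _ScrollSpeed UV units per second.
    uv.x += _ScrollSpeed * _Time.y;
    return tex2D(_MainTex, uv);
}
```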

_SinTime

float4
// sin(t/8), sin(t/4), sin(t/2), sin(t)

_CosTime

float4
// cos(t/8), cos(t/4), cos(t/2), cos(t)

unity_DeltaTime

float4
dt is simply the time of the previous frame; smoothdt is a smoothed delta time, mainly to avoid large frame-to-frame spikes.
// dt, 1/dt, smoothdt, 1/smoothdt

Lighting

The lighting variables are more complicated: they differ depending on the rendering path and the LightMode set in the Pass Tags.

Path: Forward (both Base and Add)

_LightColor0

fixed4
Forward rendering (ForwardBase and ForwardAdd pass types)
Light Color

_WorldSpaceLightPos0

float4
Forward rendering (ForwardBase and ForwardAdd pass types)
Directional lights: (world space direction, 0). Other lights: (world space position, 1)
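A sketch of how the w component distinguishes the two cases (worldPos is assumed to be the fragment's world-space position, computed elsewhere):

```hlsl
float3 lightDir;
if (_WorldSpaceLightPos0.w == 0.0)
{
    // Directional light: xyz is already a direction.
    lightDir = normalize(_WorldSpaceLightPos0.xyz);
}
else
{
    // Point/spot light: xyz is a position, so build the direction.
    lightDir = normalize(_WorldSpaceLightPos0.xyz - worldPos);
}
```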

_LightMatrix0

float4x4
Forward rendering (ForwardBase and ForwardAdd pass types)
World-to-light matrix. Used to sample cookie & attenuation textures (mostly relevant for realtime shadows).

unity_4LightPosX0, unity_4LightPosY0, unity_4LightPosZ0

float4
(ForwardBase pass only) world space positions of first four non-important point lights

unity_4LightAtten0

float4
(ForwardBase pass only) attenuation factors of the first four non-important point lights

unity_LightColor

half4[4]
(ForwardBase pass only) colors of the first four non-important point lights
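These packed variables are normally consumed through the UnityCG.cginc helper Shade4PointLights in a ForwardBase vertex shader; worldPos and worldNormal are assumed to be computed already:

```hlsl
// Evaluates the four per-vertex point lights at once.
float3 vertexLight = Shade4PointLights(
    unity_4LightPosX0, unity_4LightPosY0, unity_4LightPosZ0,
    unity_LightColor[0].rgb, unity_LightColor[1].rgb,
    unity_LightColor[2].rgb, unity_LightColor[3].rgb,
    unity_4LightAtten0,
    worldPos, worldNormal);
```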

Path: Deferred

_LightColor

float4
Light Color

_LightMatrix0

float4x4
World-to-light matrix. Used to sample cookie & attenuation textures

Path: Vertex-lit

In this path there can be at most 8 lights, sorted starting from the brightest; with fewer than 8 lights, the extra entries are black.

unity_LightColor

half4[8]
Light Colors

unity_LightPosition

half4[8]
View-space light positions. (-direction,0) for directional lights; (position,1) for point/spot lights

unity_LightAtten

half4[8]
Light attenuation factors. x is cos(spotAngle/2) or –1 for non-spot lights; y is 1/cos(spotAngle/4) or 1 for non-spot lights; z is quadratic attenuation; w is squared light range

unity_SpotDirection

float4[8]
View-space spot light directions; (0,0,1,0) for non-spot lights

Fog and ambient light

unity_AmbientSky

fixed4
Sky ambient lighting color in gradient ambient lighting case

unity_AmbientEquator

fixed4
Equator ambient lighting color in gradient ambient lighting case

unity_AmbientGround

fixed4
Ground ambient lighting color in gradient ambient lighting case

UNITY_LIGHTMODEL_AMBIENT

fixed4
This legacy variable is what you'll use most of the time.
Ambient lighting color (sky color in gradient ambient case). Legacy variable

unity_FogColor

fixed4
Fog Color

unity_FogParams

float4
The parameters needed by the three fog falloff modes.
Parameters for fog calculation: (density / sqrt(ln(2)), density / ln(2), –1/(end-start), end/(end-start)). x is useful for Exp2 fog mode, y for Exp mode, z and w for Linear mode
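A sketch of the three falloff modes written out with unity_FogParams, for a view distance d (assumed computed elsewhere):

```hlsl
float fogExp2   = exp2(-(unity_FogParams.x * d) * (unity_FogParams.x * d)); // Exp2 mode: uses x
float fogExp    = exp2(-unity_FogParams.y * d);                             // Exp mode: uses y
float fogLinear = saturate(d * unity_FogParams.z + unity_FogParams.w);      // Linear mode: uses z and w
```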

Functions

UnityObjectToClipPos

inline float4 UnityObjectToClipPos(in float3 pos)
{
    // More efficient than computing the M*VP matrix product.
    // Since Unity says this is faster, might as well use it.
    return mul(UNITY_MATRIX_VP, mul(unity_ObjectToWorld, float4(pos, 1.0)));
}
inline float4 UnityObjectToClipPos(float4 pos) // overload for float4; avoids "implicit truncation" warning for existing shaders
{
    return UnityObjectToClipPos(pos.xyz);
}

ComputeGrabScreenPos

Computes the grab-pass screen UV (xy); the input is a clip-space position.

inline float4 ComputeGrabScreenPos (float4 pos)
{
    #if UNITY_UV_STARTS_AT_TOP
    float scale = -1.0;
    #else
    float scale = 1.0;
    #endif
    float4 o = pos * 0.5f;
    o.xy = float2(o.x, o.y * scale) + o.w;
#ifdef UNITY_SINGLE_PASS_STEREO
    o.xy = TransformStereoScreenSpaceTex(o.xy, pos.w);
#endif
    o.zw = pos.zw;
    return o;
}

TRANSFORM_TEX

#define TRANSFORM_TEX(tex,name) (tex.xy * name##_ST.xy + name##_ST.zw)
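Typical use, sketched below: declare the texture together with its matching name##_ST variable, then apply tiling and offset in the vertex shader (the v2f struct is illustrative):

```hlsl
sampler2D _MainTex;
float4 _MainTex_ST;   // Unity fills this with the material's tiling (xy) and offset (zw)

struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // Expands to: v.texcoord.xy * _MainTex_ST.xy + _MainTex_ST.zw
    o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
    return o;
}
```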

ComputeScreenPos

Computes the screen position (really the viewport position, I'd say); the input is in clip space. This function is worth thinking through:
1. It differs from older versions mainly by adding VR support.
2. It does not perform the homogeneous (perspective) divide. It runs in the vertex stage and the result still has to be interpolated (interpolation of clip-space values is non-linear), so to get viewport coordinates in the fragment shader you must divide by w yourself.
3. The projection matrix essentially scales x, y and z by different amounts. Before the transform w is 1; afterwards w is the negated view-space z (view space is right-handed, the OpenGL convention, so larger z means closer to the camera). The projection matrix is also called the clip matrix: after applying it, only points with x, y, z inside [-w, w] survive clipping. (I once had the wrong idea about why dividing x, y, z by w maps them into [-1, 1]; the reason is that after the clip matrix, points on the near plane have x, y, z in [-Near, Near], points on the far plane have them in [-Far, Far], and w itself lies between Near and Far, i.e. it is the distance from the camera.) After the MVP transform and clipping, the homogeneous divide yields NDC coordinates with x, y, z in [-1, 1] (the OpenGL convention, which Unity follows; D3D uses [0, 1] for z), so everything sits inside a cube. Screen coordinates (OpenGL convention: (0, 0) at the bottom-left, the screen resolution at the top-right) then follow as roughly X(screen) = X(clip)/W(clip) [the NDC coordinate] / 2 [remapping from [-1, 1]] * pixelW [horizontal resolution] + pixelW/2. Here, however, we want viewport coordinates, in [0, 1] with the origin also at the bottom-left, which is just another remap from NDC.
So this function only does half of the job.

Another approach is to use VPOS, which I won't go into here.

inline float4 ComputeScreenPos(float4 pos)
{
    float4 o = ComputeNonStereoScreenPos(pos);
#if defined(UNITY_SINGLE_PASS_STEREO)
    o.xy = TransformStereoScreenSpaceTex(o.xy, pos.w);
#endif
    return o;
}
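A sketch of the pattern this implies: pass the result from the vertex shader and do the divide per-fragment (the v2f struct is illustrative):

```hlsl
struct v2f { float4 pos : SV_POSITION; float4 screenPos : TEXCOORD0; };

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.screenPos = ComputeScreenPos(o.pos);   // no divide yet
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    // The homogeneous divide happens here, after interpolation.
    float2 viewportUV = i.screenPos.xy / i.screenPos.w;   // [0, 1]
    return fixed4(viewportUV, 0, 1);
}
```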

ComputeNonStereoScreenPos

As the name implies: the screen position when stereo rendering is not in use.

inline float4 ComputeNonStereoScreenPos(float4 pos)
{
    float4 o = pos * 0.5f;
    o.xy = float2(o.x, o.y * _ProjectionParams.x) + o.w;
    o.zw = pos.zw;
    return o;
}

TransformStereoScreenSpaceTex

unity_StereoScaleOffset is an array of length 2 holding scale/offset values for the two eyes; again used in VR stereo rendering.

float2 TransformStereoScreenSpaceTex(float2 uv, float w)
{
    float4 scaleOffset = unity_StereoScaleOffset[unity_StereoEyeIndex];
    return uv.xy * scaleOffset.xy + scaleOffset.zw * w;
}

ShadeSH9

A very common function. The input normal must be normalized, and the result is adjusted for the current color space.
It evaluates spherical-harmonics (SH) lighting: light that is not handled as per-pixel or per-vertex lights, such as light probes, is all handled here.
This is a reminder of how forward rendering in Unity sorts lights by importance into per-pixel, per-vertex and SH: the brightest directional light and Important lights are per-pixel (when the count stays below the per-pixel light limit in Quality Settings, Not Important lights are also rendered per-pixel), then at most four lights are handled per-vertex, and the rest are handled with SH.

half3 ShadeSH9 (half4 normal)
{
    // Linear + constant polynomial terms
    half3 res = SHEvalLinearL0L1 (normal);
    // Quadratic polynomials
    res += SHEvalLinearL2 (normal);
#   ifdef UNITY_COLORSPACE_GAMMA
        res = LinearToGammaSpace (res);
#   endif
    return res;
}
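A common use, sketched here: add the SH ambient term to a simple Lambert result (albedo and lightDir are assumed to be computed already; the normal must be normalized and in world space):

```hlsl
half3 worldNormal = normalize(UnityObjectToWorldNormal(v.normal));
half3 ambient = ShadeSH9(half4(worldNormal, 1.0));
half3 diffuse = _LightColor0.rgb * albedo * saturate(dot(worldNormal, lightDir));
half3 color   = diffuse + ambient * albedo;
```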

COMPUTE_EYEDEPTH

Unity uses a left-handed coordinate system in model and world space, but a right-handed one in view space (the OpenGL tradition), which is why eye depth is the negated view-space z.

#define COMPUTE_EYEDEPTH(o) o = -UnityObjectToViewPos( v.vertex ).z
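Typical use, sketched below: store the eye depth per vertex, e.g. for soft particles or other depth-based effects. Note the macro reads v.vertex directly, so the input must be named v; the eyeDepth field is illustrative:

```hlsl
struct v2f { float4 pos : SV_POSITION; float eyeDepth : TEXCOORD0; };

v2f vert (appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    // Expands to: o.eyeDepth = -UnityObjectToViewPos(v.vertex).z
    COMPUTE_EYEDEPTH(o.eyeDepth);
    return o;
}
```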