[Learning Unity] Graphics - 02. Rendering and Shading


1. The Standard Shader

The Standard Shader is used to create Unity's default material.
To use the specular approach, select Standard (Specular setup); to use the metallic approach, select Standard.

There are three sections to the Standard Shader: Rendering Mode, Main Maps and Secondary Maps.

Rendering Mode Section

Main Maps Section

The main map section defines the look of the material.

There are a few subjects that are worth covering first.
1. Optimisation. The Standard Shader is highly optimised. When it is built, two important things happen: all properties that are not being used are discarded, and the build target is checked so that the shader is optimised for that device.
2. Physically-based shading. Physically-based shading tries to define certain physical aspects of a material's surface, including its diffuse colour, specular reflection and other properties, so that the material behaves correctly and believably in all lighting environments.

In the Main Maps section, each of these properties controls one aspect of the final material. Each property can be defined by a texture map.

It is worth noting that when using a texture to define the metalness, the smoothness value must also be defined by that texture's alpha channel. It is also worth noting that the metalness value is stored only in the red channel of the metalness map; the green and blue channels are ignored. It is often easier, however, to visualise the metalness values of a texture if all three colour channels share the same values, so the texture appears as a greyscale image.
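
To make that channel layout concrete, here is a minimal C# sketch (assuming a Standard Shader material and the built-in _MetallicGlossMap property) that builds a metallic/smoothness map in code. Only red (metalness) and alpha (smoothness) are read by the shader; green and blue are just copies of red so the map reads as greyscale.

    using UnityEngine;

    // A minimal sketch: builds a 64x64 metallic/smoothness map in code and assigns it
    // to a Standard Shader material. Only the red channel (metalness) and the alpha
    // channel (smoothness) are read; green and blue are ignored.
    public class MetallicMapExample : MonoBehaviour
    {
        void Start()
        {
            const int size = 64;
            var tex = new Texture2D(size, size, TextureFormat.RGBA32, false);

            for (int y = 0; y < size; y++)
            {
                for (int x = 0; x < size; x++)
                {
                    float metalness = x / (float)(size - 1);   // 0 = dielectric, 1 = metal
                    float smoothness = y / (float)(size - 1);  // stored in the alpha channel
                    // Copy metalness into G and B as well so the map looks greyscale.
                    tex.SetPixel(x, y, new Color(metalness, metalness, metalness, smoothness));
                }
            }
            tex.Apply();

            var material = GetComponent<Renderer>().material;   // assumes a Standard Shader material
            material.SetTexture("_MetallicGlossMap", tex);
            material.EnableKeyword("_METALLICGLOSSMAP");        // enable the metallic map variant
        }
    }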

The Secondary Maps Section

The secondary maps are used to define additional surface detail. This additional detail, sometimes referred to as micro detail, is added on top of the surface defined by the main maps.

Working with Physically Based Rendering (Blog)

Authoring Physically-based Shading Materials

To avoid the guesswork involved in emulating real world materials, it is useful to follow a reliable known reference. The Standard Shader supports both a Specular Color and a Metallic workflow. They both define the color of the reflections leaving the surface. In the Specular workflow, that color is specified directly, whilst in the Metallic workflow, it is derived from a combination of the diffuse color and the metallic value set in the Standard Shader controls.

For the Viking Village project, we used the Standard Shader’s Specular Color Workflow. Our calibration scene, which you can download from the Asset Store, includes some handy calibration charts. We referenced the charts regularly when designing our materials.

When approaching materials you can choose between what we call the Specular and the Metallic workflows, each with its own set of values and a reference chart. In the Specular workflow you choose the color of the specularly reflected light directly; in the Metallic workflow you choose whether the material behaves like a metal when it is illuminated.

The specular value chart:

Specular Value Charts

The metallic value chart:

Metallic Value Charts

Choosing between the Specular and Metallic workflows is largely a matter of personal preference; you can usually get the same result with either one.
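
As a rough script-side sketch of the two setups (assuming the built-in shader names "Standard" and "Standard (Specular setup)" and their _Color, _SpecColor, _Metallic and _Glossiness properties), here is one way to aim for a similar shiny-metal look in each workflow:

    using UnityEngine;

    // A hedged sketch: two materials aiming for a similar "shiny metal" look,
    // one using the Specular setup, one using the Metallic setup.
    public class WorkflowExample : MonoBehaviour
    {
        void Start()
        {
            // Specular workflow: the colour of the reflected light is chosen directly.
            var specular = new Material(Shader.Find("Standard (Specular setup)"));
            specular.SetColor("_Color", Color.grey);                          // albedo
            specular.SetColor("_SpecColor", new Color(0.95f, 0.93f, 0.88f));  // e.g. a silver-like specular colour
            specular.SetFloat("_Glossiness", 0.8f);                           // smoothness

            // Metallic workflow: reflection colour is derived from albedo + metallic value.
            var metallic = new Material(Shader.Find("Standard"));
            metallic.SetColor("_Color", new Color(0.95f, 0.93f, 0.88f));      // albedo doubles as reflection tint
            metallic.SetFloat("_Metallic", 1.0f);                             // fully metallic
            metallic.SetFloat("_Glossiness", 0.8f);                           // smoothness

            // Assign one of them to compare; swap in 'specular' to see the other setup.
            GetComponent<Renderer>().material = metallic;
        }
    }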

Aside from charts and values, gathering samples of real world surfaces is highly valuable. It is of great help to find the surface type you are trying to imitate and try to get an understanding of how it reacts to light.

Setting up the material

When starting out, it’s often useful to create a plain but tweakable representation of the materials using colors, values and sliders derived from the calibration charts. Then, you can apply textures while keeping the original material as a reference to confirm that characteristics are preserved.

Top row: untextured. Bottom row: textured. Left to right: Rock, Wood, Bone, Metal

About the calibration scene

Those having trouble importing the asset via the Asset Store window can get the .zip file here (I hope it's OK to publicize the link):
https://oc.unity3d.com/public.php?service=files&t=18b2bab9dfb976f05465244d0dd6344c

A very useful resource for those wanting a better understanding of Unity's new Physically Based Shader and how to set up materials using both the Specular and the default (Metallic) setup. From the example materials, you'll get a good idea of how to configure the shader and create maps for different elements such as metal, wood, skin, etc.

PHYSICALLY BASED SHADING IN UNITY 5: A PRIMER

What is Physically Based Shading? Physically Based Shading (PBS for short) simulates the interactions between materials and light in a way that mimics reality. PBS has only recently become possible in real-time graphics. In situations where lighting and materials need to play together intuitively and realistically, it’s a big win.

Context and Content: When thinking about lighting in Unity, it is handy to divide the concepts into what we call the content (the item being lit and rendered) and the context (the lighting that exists in the scene and affects the object being lit).

Global illumination is an important part of the context that's needed for PBS. To get a comprehensive overview of how it will work in Unity 5, there is nothing better than to check our blog post on Dynamic GI.

GLOBAL ILLUMINATION IN UNITY 5

To read!

EXTENDING UNITY 5 RENDERING PIPELINE: COMMAND BUFFERS

For Unity 5, we settled on the ability to create "list of things to do" buffers, which we dubbed "Command Buffers".

A command buffer in graphics is a low-level list of commands to execute. For example, 3D rendering APIs like Direct3D or OpenGL typically end up constructing a command buffer that is then executed by the GPU. Unity’s multi-threaded renderer also constructs a command buffer between a calling thread and the “worker thread” that submits commands to the rendering API.

In our case the idea is very similar, but the “commands” are somewhat higher level. Instead of containing things like “set internal GPU register X to value Y”, the commands are “Draw this mesh with that material” and so on.
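
A minimal sketch of what using this API can look like from a script (the mesh and material here are placeholders to be assigned in the inspector):

    using UnityEngine;
    using UnityEngine.Rendering;

    // A minimal sketch of the command buffer API: record "draw this mesh with that
    // material" style commands and hook them into a specific point of the camera's rendering.
    [RequireComponent(typeof(Camera))]
    public class CommandBufferExample : MonoBehaviour
    {
        public Mesh mesh;          // placeholder: assign in the inspector
        public Material material;  // placeholder: assign in the inspector

        CommandBuffer buffer;

        void OnEnable()
        {
            buffer = new CommandBuffer { name = "Draw extra mesh" };
            buffer.DrawMesh(mesh, Matrix4x4.identity, material);

            // Execute the recorded commands right after the opaque geometry is drawn.
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
        }

        void OnDisable()
        {
            GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
        }
    }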

Actually, here’s a small Unity (5.0 beta 22) project folder that demonstrates everything above: RenderingCommandBuffers50b22.zip.
I have not finished working through this example yet!

OPTIMIZING SHADER INFO LOADING, OR LOOK AT YER DATA!

To read!

2. Materials

How to control the visual appearance of GameObjects by assigning materials, which control the shaders, colours and textures used by a renderer.

Materials are used in conjunction with Mesh Renderers, Particle Systems and other rendering components used in Unity. They play an essential part in defining how your object is displayed.

A typical Material inspector

The properties that a Material’s inspector displays are determined by the Shader that the Material uses. A shader is a specialised kind of graphical program that determines how texture and lighting information are combined to generate the pixels of the rendered object onscreen.
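
As a small illustration of that relationship, here is a hedged C# sketch that creates a material from the built-in Standard shader and sets a few of the properties that shader exposes:

    using UnityEngine;

    // A small sketch: create a material from a shader, set a few of the properties
    // the shader exposes, and assign it to this object's renderer.
    public class MaterialSetupExample : MonoBehaviour
    {
        public Texture2D albedoTexture; // placeholder: assign in the inspector

        void Start()
        {
            // The shader decides which properties the material (and its inspector) has.
            var material = new Material(Shader.Find("Standard"));
            material.color = Color.white;                   // shorthand for the "_Color" property
            material.SetTexture("_MainTex", albedoTexture); // the albedo/base texture slot
            material.SetFloat("_Glossiness", 0.4f);         // smoothness slider

            GetComponent<Renderer>().material = material;   // instance assigned to this renderer
        }
    }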

3. Textures

What are Textures? How does Unity use and import them?

A texture is an image file. The most common use of a texture is to apply it to a material in the base texture property, giving a mesh a textured surface.

Textures can be any image file format supported by Unity. These can be photos straight from a digital source, but textures are usually images created or manipulated in an image editor, like Photoshop or GIMP. It is important to note that layered files are flattened on import, but the layers are maintained in the original file. This means we can turn layers on and off without loss while we are setting up our game, but at runtime we will not have access to these layers individually.
Most of the image file formats used by Unity support transparency. The notable exception is JPEG, which does not. For more information on transparency and how to use it, see the documentation on materials and shaders.
Texture files should be saved in the Assets folder. Unity searches the Assets folder in a specific order when seeking textures for materials. For more information on searching for textures and seeking materials, see the lesson on mesh importing. (Where exactly?)

If we need to set transparency or alpha channel for the texture automatically based on the light and dark parts of the image, we can check the Alpha From Greyscale field. Black will be completely transparent, white will be completely opaque.
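
If you want to apply that setting automatically from an editor script rather than ticking the box by hand, a sketch along these lines should work in this Unity version (the grayscaleToAlpha flag corresponds to the Alpha From Greyscale checkbox; later Unity versions replace it with an alpha-source setting; the folder name used to filter assets is purely hypothetical, and the script must live in an Editor folder):

    using UnityEditor;

    // Editor-only sketch: automatically enable "Alpha From Greyscale" for any texture
    // imported from a folder named "AlphaFromGrey" (hypothetical folder convention).
    public class AlphaFromGreyscaleImporter : AssetPostprocessor
    {
        void OnPreprocessTexture()
        {
            if (!assetPath.Contains("/AlphaFromGrey/"))
                return;

            var importer = (TextureImporter)assetImporter;
            // Dark pixels become transparent, bright pixels opaque.
            importer.grayscaleToAlpha = true;
        }
    }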

More options: 2D Textures

Texture (Script Reference)

4. A Gentle Introduction to Shaders

Note: to read!
Shader Reference
Materials, Shaders & Textures

This tutorial will gently introduce you to shader coding, and is oriented to developers with little to no knowledge about shaders. Tutorials to extend the knowledge gained from this one can be found at Alan Zucconi’s site.

Introduction

The diagram below loosely represents the three different entities which play a role in the rendering workflow of Unity:

rendering workflow

3D models are, essentially, a collection of 3D coordinates called vertices. They are connected together to make triangles. Each vertex can carry a few other pieces of information, such as a colour, the direction it points in (called the normal) and coordinates used to map textures onto it (called UV data).

Models cannot be rendered without a material. Materials are wrappers which contain a shader and the values for its properties. Hence, different materials can share the same shader, feeding it with different data.

Anatomy of A Shader

Unity supports two different types of shaders: surface shaders, and vertex and fragment shaders. There is a third type, fixed function shaders, but they are now obsolete and will not be covered here. Regardless of which type fits your needs, the anatomy of a shader is the same for all of them:

Shader "MyShader"{    Properties    {        // The properties of your shaders        // - textures        // - colours        // - parameters        // ...    }    SubShader    {        // The code of your shaders        // - surface shader        //    OR        // - vertex and fragment shader        //    OR        // - fixed function shader    }   }

You can have multiple SubShader sections, one after the other. They contain the actual instructions for the GPU. Unity will try to execute them in order, until it finds one that is compatible with your graphics card. This is useful when coding for different platforms, since you can fit different versions of the same shader in a single file.

The Properties

The properties of your shader are somewhat equivalent to the public fields in a C# script: they'll appear in the inspector of your material, giving you the chance to tweak them. Unlike what happens with a script, though, materials are assets: changes made to the properties of a material while the game is running in the editor are permanent. Even after stopping the game, you'll find the changes you made persisting in your material.

The following snippet covers the definition of all the basic types of properties you can have in a shader:

Properties
{
    _MyTexture ("My texture", 2D) = "white" {}
    _MyNormalMap ("My normal map", 2D) = "bump" {}  // Grey
    _MyInt ("My integer", Int) = 2
    _MyFloat ("My float", Float) = 1.5
    _MyRange ("My range", Range(0.0, 1.0)) = 0.5
    _MyColor ("My colour", Color) = (1, 0, 0, 1)    // (R, G, B, A)
    _MyVector ("My Vector4", Vector) = (0, 0, 0, 0) // (x, y, z, w)
}

The type 2D, used for _MyTexture and _MyNormalMap, indicates that the parameters are textures. They can be initialised to white, black or grey. You can also use bump to indicate that the texture will be used as a normal map. Vectors and Colors always have four elements (XYZW and RGBA, respectively).

The image below shows how these properties appear in the inspector, once the shader is attached to a material.

Unfortunately, this is not enough to use our properties. The Properties section, in fact, is only used by Unity to expose, via the inspector, the hidden variables within a shader. These variables still need to be declared in the actual body of the shader, which is contained in the SubShader section.

SubShader
{
    // Code of the shader
    // ...

    sampler2D _MyTexture;
    sampler2D _MyNormalMap;

    int _MyInt;
    float _MyFloat;
    float _MyRange;

    half4 _MyColor;
    float4 _MyVector;

    // Code of the shader
    // ...
}

The type used for textures is sampler2D. Vectors are float4 and colours are generally half4, which use 32 and 16 bits per component, respectively. The language used to write shaders, Cg / HLSL, is very pedantic: the names of the variables must match exactly the ones previously defined. The types, however, don't need to: you won't get any error for declaring _MyRange as half instead of float. Somewhat confusingly, you can define a property of type Vector that is linked to a float2 variable; the extra two values will simply be ignored by Unity.

The Rendering Order

As already mentioned, the SubShader section contains the actual code of the shader, written in Cg / HLSL, which closely resembles C. Loosely speaking, the body of a shader is executed for every pixel of your image, so performance here is critical. Due to the architecture of GPUs, there is a limit on the number of instructions you can perform in a shader. It is possible to avoid this limit by dividing the computation into several passes, but that won't be covered in this tutorial.

The body of a shader typically looks like this:

SubShader
{
    Tags
    {
        "Queue" = "Geometry"
        "RenderType" = "Opaque"
    }

    CGPROGRAM
    // Cg / HLSL code of the shader
    // ...
    ENDCG
}

The actual Cg code is contained in the section signalled by the CGPROGRAM and ENDCG directives.

Before the actual body, the concept of tags is introduced. Tags are a way of telling Unity certain properties of the shader we are writing. For instance, the order in which it should be rendered (Queue) and how it should be rendered (RenderType).

When rendering triangles, the GPU usually sorts them according to their distance from the camera, so that the further ones are drawn first. This is typically enough to render solid geometry, but it often fails with transparent objects. This is why Unity allows you to specify the Queue tag, which gives control over the rendering order of each material. Queue accepts positive integer values (the smaller the number, the sooner the object is drawn); mnemonic labels can also be used:

  • Background (1000): used for backgrounds and skyboxes,
  • Geometry (2000): the default label used for most solid objects,
  • Transparent (3000): used for materials with transparent properties, such as glass, fire, particles and water;
  • Overlay (4000): used for effects such as lens flares, GUI elements and texts.

Unity also allows you to specify relative orders, such as Background+2, which indicates a queue value of 1002. Messing with Queue can generate nasty situations in which an object is always drawn, even when it should be covered by other models.
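
The queue can also be read or overridden per material from script; here is a small sketch using the Material.renderQueue property (setting it to -1 falls back to whatever queue the shader declares):

    using UnityEngine;

    // A sketch showing how a material's render queue maps to the named labels:
    // Background = 1000, Geometry = 2000, Transparent = 3000, Overlay = 4000.
    public class RenderQueueExample : MonoBehaviour
    {
        void Start()
        {
            var material = GetComponent<Renderer>().material;

            Debug.Log("Queue from shader: " + material.renderQueue); // e.g. 2000 for Geometry

            // Equivalent to the shader tag "Queue" = "Background+2".
            material.renderQueue = 1002;

            // Setting -1 goes back to the queue declared by the shader.
            // material.renderQueue = -1;
        }
    }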

ZTest

It is important to remember, however, that an object in the Transparent queue doesn't necessarily always appear above an object in the Geometry queue. The GPU, by default, performs a test called ZTest which stops hidden pixels from being drawn. To work, it uses an extra buffer with the same size as the screen it's rendering to. Each pixel in this buffer stores the depth (distance from the camera) of the object drawn at that pixel. If we are about to write a pixel that is further away than the current depth, the pixel is discarded. The ZTest culls pixels which are hidden by other objects, regardless of the order in which they are drawn onto the screen.
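
Conceptually, the test boils down to a per-pixel depth comparison; here is a toy C# sketch of that logic (purely illustrative, not how the GPU is actually implemented, and ignoring configurable comparison modes):

    // Toy illustration of a depth (Z) test for a single pixel. Real GPUs do this in
    // hardware; this only mirrors the logic described above.
    public static class ZTestSketch
    {
        // depthBuffer[x, y] holds the distance of the closest surface drawn so far.
        public static bool TryWritePixel(float[,] depthBuffer, int x, int y, float newDepth)
        {
            if (newDepth >= depthBuffer[x, y])
                return false;             // further away than what is already there: discard

            depthBuffer[x, y] = newDepth; // closer: keep the pixel and update the depth
            return true;
        }
    }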

Surface VS Vertex And Fragment

The last part which needs to be covered is the actual code of the shader. Before doing this, we'll have to decide which type of shader to use. This section will give a glimpse of what shaders look like, but it won't really explain them. Both surface shaders and vertex and fragment shaders will be covered extensively in the next parts of this tutorial.

The Surface Shader

Whenever the material you want to simulate needs to be affected by lights in a realistic way, chances are you'll need a surface shader. Surface shaders hide the calculations of how light is reflected and allow you to specify "intuitive" properties such as the albedo, the normals, the reflectivity and so on in a function called surf. These values are then plugged into a lighting model which outputs the final RGB values for each pixel. Alternatively, you can write your own lighting model, but this is only needed for very advanced effects.

The Cg code of a typical surface shader looks like this:

CGPROGRAM
// Uses the Lambertian lighting model
#pragma surface surf Lambert

sampler2D _MainTex; // The input texture

struct Input {
    float2 uv_MainTex;
};

void surf (Input IN, inout SurfaceOutput o) {
    o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
}
ENDCG

In this example a texture is provided via the line sampler2D _MainTex; and is then set as the Albedo property of the material in the surf function. The shader uses a Lambertian lighting model, which is a very typical way of modelling how light reflects off an object. Shaders which only use the albedo property are typically called diffuse.

The Vertex And Fragment Shader

Vertex and fragment shaders work closer to the way the GPU actually renders triangles and have no built-in concept of how light should behave. The geometry of your model is first passed through a function called vert, which can alter its vertices. Then, individual triangles are passed through another function called frag, which decides the final RGB colour of every pixel. They are useful for 2D effects, post-processing and special 3D effects which are too complex to be expressed as surface shaders.

The following vertex and fragment shader simply makes an object uniformly red, with no lighting:

Pass {
    CGPROGRAM

    #pragma vertex vert
    #pragma fragment frag

    struct vertInput {
        float4 pos : POSITION;
    };

    struct vertOutput {
        float4 pos : SV_POSITION;
    };

    vertOutput vert(vertInput input) {
        vertOutput o;
        o.pos = mul(UNITY_MATRIX_MVP, input.pos);
        return o;
    }

    half4 frag(vertOutput output) : COLOR {
        return half4(1.0, 0.0, 0.0, 1.0);
    }

    ENDCG
}

The vert function converts the vertices from their native 3D space to their final 2D position on the screen. Unity provides the UNITY_MATRIX_MVP matrix to hide the maths behind this. After that, the return value of the frag function gives a red colour to every pixel. Just remember that the Cg section of a vertex and fragment shader needs to be enclosed in a Pass section. This is not the case for simple surface shaders, which will work with or without it.

Conclusion

This tutorial gently introduces the two types of shaders available in Unity and explains when to use one over the other. Further tutorials following from this one can be found at Alan Zucconi’s site.

5. Using Detail Textures For Extra Realism Close-Up

A detail texture is a pattern that is faded in gradually on a mesh as the camera gets close. This can be used to simulate dirt, weathering or other similar detail on a surface without adding to the rendering overhead when the camera is too far away to see the difference. This lesson explains how to use detail textures with Unity.

Obtaining the Texture

A detail texture is a greyscale image that is used to lighten or darken another texture selectively. Where a pixel has a brightness value between 0 and 127, the image will be darkened (0 denotes maximum darkening), and where the value is between 129 and 255, the image will be lightened (255 denotes maximum lightening). A value of exactly 128 leaves the underlying image unchanged.
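
Detail maps of this kind are commonly applied with a "multiply by two" blend, which is why mid-grey leaves the base unchanged; the following C# sketch only illustrates the per-channel arithmetic and is not Unity's exact shader code:

    using UnityEngine;

    // Illustrates the usual "multiply by 2" detail blend (a sketch of the arithmetic only).
    // Values are in the 0..1 range, so a detail value of ~0.5 (128/255) leaves the base
    // colour unchanged, lower values darken it and higher values lighten it.
    public static class DetailBlend
    {
        public static float Apply(float baseValue, float detailValue)
        {
            return Mathf.Clamp01(baseValue * detailValue * 2f);
        }
    }

    // Example: Apply(0.6f, 0.5f) ≈ 0.6 (unchanged), Apply(0.6f, 0.25f) = 0.3 (darker),
    // Apply(0.6f, 0.75f) = 0.9 (lighter).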

Unity Import Settings For the Detail Texture

The Detail Material

6. Frame Debugger

In this video we'll look at how to use Unity's Frame Debugger to analyze and troubleshoot graphical performance.

In Unity, rendering a frame to the screen occurs as a linear sequence of events. The Frame Debugger is a tool which lets you step through everything involved in rendering a frame. It allows you to see the intermediate render targets, shadow maps and each draw call that contributes to the frame, in order.
The Frame Debugger is accessed from Window/Frame Debugger.

When setting up a new project we can choose between four different rendering paths. Each rendering path renders content differently, and choosing which to use depends heavily on the nature of your project.
Each rendering path can use a different number of render targets. A render target is a feature of GPUs that allows a scene to be rendered to an intermediate memory buffer.
Using the Forward rendering path, one render target is used by default, shown here as RT0, along with a depth buffer. Using custom shaders it is possible to use more render targets in forward rendering, but with the Standard Shader this is the default behaviour.
In the Deferred rendering path, multiple render targets are used. When rendering to multiple targets at once, you can select which one to display using the drop-down menu. In this example we can see the diffuse, specular, normals, and emission and indirect lighting buffers.
When an event is highlighted, information about that event is displayed in the information panel.

Frame Debugger
