Normal Mapping


Reprinted from: http://blog.sina.com.cn/s/blog_556e97420100smkw.html

A long time ago I studied normal mapping, and I could never understand the tangent space part of it. It took a long time and a lot of related posts before I finally figured it out. Still, I feel most of those posts do not explain it well: they open with the mathematical derivation of the TBN transformation matrix, which is too abstract. After reading them, I still did not know what TBN actually was. At least I didn't; maybe I am just slow.

 

Later I downloaded Introduction to 3D Game Programming with DirectX 10 and found that its "Chapter 12: Normal Mapping" explains the topic very clearly; the example figures made it click immediately. So I translated this chapter for everyone's reference.

 

The copyright of this chapter belongs to the original author (Frank D. Luna). Where the translation falls short, please refer to the original text.


Overview

In Chapter 7, we introduced texture mapping, which enabled us to map fine details from an image onto our triangles. However, our normal vectors are still defined at the coarser vertex level and interpolated over the triangle. In this chapter, we study a popular method for specifying surface normals at a higher resolution.

The Normal Map demo for this chapter is available in the download files.

Objectives:

  • To understand why we need normal mapping.

  • To discover how normal maps are stored.

  • To learn how normal maps can be created.

  • To determine the coordinate system the normal vectors in normal maps are stored relative to, and how it relates to the object space coordinate system of a 3D triangle.

  • To learn how to implement normal mapping in a vertex and pixel shader.

12.1 Motivation
Consider Figure 12.1 from the Cube Map demo of the preceding chapter (see the download files). The specular highlights on the cone-shaped columns do not look right — they look unnaturally smooth compared to the bumpiness of the brick texture. This is because the underlying mesh geometry is smooth, and we have merely applied the image of bumpy bricks over the smooth cylindrical surface. However, the lighting calculations are performed based on the mesh geometry (in particular, the interpolated vertex normals), and not the texture image. Thus the lighting is not completely consistent with the texture.
 
Ideally, we would tessellate the mesh geometry so much that the actual bumps and crevices of the bricks could be modeled by the underlying geometry. Then the lighting and texture could be made consistent. However, tessellating a mesh to such a resolution is not practical due to the huge increase in vertex and triangle count.
Another possible solution would be to bake the lighting details directly into the textures. However, this will not work if the lights are allowed to move, as the texel colors will remain fixed as the lights move.
 
Thus our goal is to find a way to implement dynamic lighting such that the fine details that show up in the texture map also show up in the lighting. Since textures provide us with the fine details to begin with, it is natural to look for a texture mapping solution to this problem. Figure 12.2 shows the same scene with normal mapping; we can see now that the dynamic lighting is much more consistent with the brick texture.
 


12.2 Normal Maps
A normal map is a texture, but instead of storing RGB data at each texel, we store a compressed x-coordinate, y-coordinate, and z-coordinate in the red component, green component, and blue component, respectively. These coordinates define a normal vector; thus a normal map stores a normal vector at each pixel. Figure 12.3 shows an example of how to visualize a normal map.

 
Figure 12.3: Normals stored in a normal map relative to a texture space coordinate system defined by the vectors T (x-axis), B (y-axis), and N (z-axis). The T vector runs horizontally to the right along the texture image, the B vector runs vertically down the texture image, and N is orthogonal to the texture plane.
For illustration, we will assume a 24-bit image format, which reserves a byte for each color component, and therefore, each color component can range from 0 to 255. (A 32-bit format could be employed where the alpha component goes unused or stores some other scalar value. Also, a floating-point format could be used in which no compression is necessary, but this requires more memory, of course.)

 Note  As Figure 12.3 shows, the vectors are generally mostly aligned with the z-axis. That is, the z-coordinate has the largest magnitude. Consequently, normal maps usually appear mostly blue when viewed as a color image. This is because the z-coordinate is stored in the blue channel and since it has the largest magnitude, this color dominates.
 

So how do we compress a unit vector into this format? First note that for a unit vector, each coordinate always lies in the range [−1, 1]. If we shift and scale this range to [0, 1] and multiply by 255 and truncate the decimal, the result will be an integer in the range 0 to 255. That is, if x is a coordinate in the range [−1, 1], then the integer part of f(x) is an integer in the range 0 to 255, where f is defined by

$$f(x) = (0.5x + 0.5) \cdot 255$$
So to store a unit vector in a 24-bit image, we just apply f to each coordinate and write the coordinate to the corresponding color channel in the texture map.

The next question is how to reverse the compression process; that is, given a compressed texture coordinate in the range 0 to 255, how can we recover its true value in the interval [−1, 1]? The answer is to simply invert the function f, which after a little thought, can be seen to be:

$$f^{-1}(x) = \frac{2x}{255} - 1$$
That is, if x is an integer in the range 0 to 255, then f⁻¹(x) is a floating-point number in the range [−1, 1].

We will not have to do the compression process ourselves, as we will use a Photoshop plug-in to convert images to normal maps. However, when we sample a normal map in a pixel shader, we will have to do part of the inverse process to uncompress it. When we sample a normal map in a shader like this:

float3 normalT = gNormalMap.Sample( gTriLinearSam, pIn.texC );

the color vector normalT will have normalized components (r, g, b) such that 0 ≤ r, g, b ≤ 1. Thus, the method has already done part of the uncompressing work for us (namely the divide by 255, which transforms an integer in the range 0 to 255 to the floating-point interval [0, 1]). We complete the transformation by shifting and scaling each component in [0, 1] to [−1, 1] with the function g: [0, 1] → [−1, 1] defined by:

$$g(x) = 2x - 1$$
In code, we apply this function to each color component like this:

// Uncompress each component from [0,1] to [-1,1].
normalT = 2.0f*normalT - 1.0f;

This works because the scalar 1.0 is augmented to the vector (1, 1, 1) so that the expression makes sense and is done componentwise.


12.3 Texture/Tangent Space
Consider a 3D texture mapped triangle. For the sake of discussion, suppose that there is no distortion in the texture mapping; in other words, mapping the texture triangle onto the 3D triangle requires only a rigid body transformation (translation and rotation). Now, suppose that the texture is like a decal. So we pick up the decal, translate it, and rotate it onto the 3D triangle. Figure 12.4 shows how the texture space axes relate to the 3D triangle: They are tangent to the triangle and lie in the plane of the triangle. The texture coordinates of the triangle are, of course, relative to the texture space coordinate system. Incorporating the triangle face normal N, we obtain a 3D TBN-basis in the plane of the triangle that we call texture space or tangent space. Note that the tangent space generally varies from triangle to triangle (see Figure 12.5).

 
Figure 12.4: The relationship between the texture space of a triangle and the object space.
 
Figure 12.5: The texture space is different for each face of the box.
Now, as Figure 12.3 shows, the normal vectors in a normal map are defined relative to the texture space. But our lights are defined in world space. In order to do lighting, the normal vectors and lights need to be in the same space. So our first step is to relate the tangent space coordinate system with the object space coordinate system the triangle vertices are relative to. Once we are in object space, we can use the world matrix to get from object space to world space (the details of this are covered in the next section).

Let v0, v1, and v2 define the three vertices of a 3D triangle with corresponding texture coordinates (u0, v0), (u1, v1), and (u2, v2) that define a triangle in the texture plane relative to the texture space axes (i.e., T and B). Let e0 = v1 − v0 and e1 = v2 − v0 be two edge vectors of the 3D triangle with corresponding texture triangle edge vectors (Δu0, Δv0) = (u1 − u0, v1 − v0) and (Δu1, Δv1) = (u2 − u0, v2 − v0). From Figure 12.4, it is clear that

$$e_0 = \Delta u_0\,T + \Delta v_0\,B \qquad e_1 = \Delta u_1\,T + \Delta v_1\,B$$
Representing the vectors with coordinates relative to object space (as 1×3 row vectors), we get the matrix equation:

$$\begin{bmatrix} e_0 \\ e_1 \end{bmatrix} = \begin{bmatrix} \Delta u_0 & \Delta v_0 \\ \Delta u_1 & \Delta v_1 \end{bmatrix} \begin{bmatrix} T \\ B \end{bmatrix}$$
Note that we know the object space coordinates of the triangle vertices; hence we know the object space coordinates of the edge vectors, so the matrix

$$\begin{bmatrix} e_0 \\ e_1 \end{bmatrix}$$
is known. Likewise, we know the texture coordinates, so the matrix

$$\begin{bmatrix} \Delta u_0 & \Delta v_0 \\ \Delta u_1 & \Delta v_1 \end{bmatrix}$$

is known.
is known. Solving for the T and B object space coordinates we get:

 
In the above, we used the fact that the inverse of a 2×2 matrix

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

is given by:

$$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
Note that the vectors T and B are generally not unit length in object space, and if there is texture distortion, they will not be orthonormal either.

The T, B, and N vectors are commonly referred to as the tangent, binormal (or bitangent), and normal vectors, respectively.


12.4 Vertex Tangent Space

In the previous section, we derived a tangent space per triangle. However, if we use this texture space for normal mapping, we will get a triangulated appearance since the tangent space is constant over the face of the triangle. Therefore, we specify tangent vectors per vertex, and we do the same averaging trick that we did with vertex normals to approximate a smooth surface:

1. The tangent vector T for an arbitrary vertex v in a mesh is found by averaging the tangent vectors of every triangle in the mesh that shares the vertex v.

2. The bitangent vector B for an arbitrary vertex v in a mesh is found by averaging the bitangent vectors of every triangle in the mesh that shares the vertex v (see the sketch after this list).

 

After averaging, the TBN-bases will generally need to be orthonormalized, so that the vectors are mutually orthogonal and of unit length. This is usually done using the Gram-Schmidt procedure. Code is available on the web for building a per-vertex tangent space for an arbitrary triangle mesh: http://www.terathon.com/code/tangent.php.

In our system, we will not store the bitangent vector B directly in memory. Instead, we will compute B = N × T when we need B, where N is the usual averaged vertex normal. Hence, our vertex structure looks like this:

struct Vertex
{
    D3DXVECTOR3 pos;
    D3DXVECTOR3 tangent;
    D3DXVECTOR3 normal;
    D3DXVECTOR2 texC;
};

For our Normal Map demo, we will continue to use the Quad, Box, Cylinder, and Sphere classes. We have updated these classes to include a tangent vector per vertex. The object space coordinates of the tangent vector T are easily specified at each vertex for the Quad and Box (see Figure 12.5). For the Cylinder and Sphere, the tangent vector T at each vertex can be found by forming the vector-valued function of two variables P(u, v) of the cylinder/sphere and computing ∂P/∂u, where the parameter u is also used as the u-texture coordinate (a sketch follows below).

 

12.5 Transforming between Tangent Space and Object Space
 

At this point, we have an orthonormal TBN-basis at each vertex in a mesh. Moreover, we have the coordinates of the TBN vectors relative to the object space of the mesh. So now that we have the coordinates of the TBN-basis relative to the object space coordinate system, we can transform coordinates from tangent space to object space with the matrix:

$$M_{object} = \begin{bmatrix} T_x & T_y & T_z \\ B_x & B_y & B_z \\ N_x & N_y & N_z \end{bmatrix}$$

Since this matrix is orthogonal, its inverse is its transpose. Thus, the change of coordinate matrix from object space to tangent space is:

$$M_{tangent} = M_{object}^{-1} = M_{object}^{T} = \begin{bmatrix} T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z \end{bmatrix}$$

In our shader program, we will actually want to transform the normal vector from tangent space to world space for lighting. One way would be to transform the normal from tangent space to object space first, and then use the world matrix to transform from object space to world space:

$$n_{world} = (n_{tangent}\,M_{object})\,M_{world}$$

However, since matrix multiplication is associative, we can do it like this:

$$n_{world} = n_{tangent}\,(M_{object}\,M_{world})$$

And note that

$$M_{object}\,M_{world} = \begin{bmatrix} T \\ B \\ N \end{bmatrix} M_{world} = \begin{bmatrix} T \cdot M_{world} \\ B \cdot M_{world} \\ N \cdot M_{world} \end{bmatrix} = \begin{bmatrix} T' \\ B' \\ N' \end{bmatrix}$$

where T′ = T · M_world, B′ = B · M_world, and N′ = N · M_world. So to go from tangent space directly to world space, we just have to describe the tangent basis in world coordinates, which can be done by transforming the TBN-basis from object space coordinates to world space coordinates.

 

 Note  We will only be interested in transforming vectors (not points). Thus, we only need a 3×3 matrix. Recall that the fourth row of an affine matrix is for translation, but we do not translate vectors.


12.6 Shader Code
We summarize the general process for normal mapping:

1. Create the desired normal maps from some art or utility program and store them in an image file. Create 2D textures from these files when the program is initialized.

2. For each triangle, compute the tangent vector T. Obtain a per-vertex tangent vector for each vertex v in a mesh by averaging the tangent vectors of every triangle in the mesh that shares the vertex v. (In our demo, we use simple geometry and are able to specify the tangent vectors directly, but this averaging process would need to be done if using arbitrary triangle meshes made in a 3D modeling program.)

3. In the vertex shader, transform the vertex normal and tangent vector to world space and output the results to the pixel shader.

4. Using the interpolated tangent vector and normal vector, we build the TBN-basis at each pixel point on the surface of the triangle. We use this basis to transform the sampled normal vector from the normal map from tangent space to world space. We then have a world space normal vector from the normal map to use for our usual lighting calculations.

 

The entire normal mapping effect is shown below for completeness.

 

#include "lighthelper.fx"

cbuffer cbPerFrame
{
    Light gLight;
    float3 gEyePosW;
};

cbuffer cbPerObject
{
    float4x4 gWorld;
    float4x4 gWVP;
    float4x4 gTexMtx;
    float4 gReflectMtrl;
    bool gCubeMapEnabled;
};

// Nonnumeric values cannot be added to a cbuffer.
Texture2D gDiffuseMap;
Texture2D gSpecMap;
Texture2D gNormalMap;
TextureCube gCubeMap;

SamplerState gTriLinearSam
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

struct VS_IN
{
    float3 posL     : POSITION;
    float3 tangentL : TANGENT;
    float3 normalL  : NORMAL;
    float2 texC     : TEXCOORD;
};

struct VS_OUT
{
    float4 posH     : SV_POSITION;
    float3 posW     : POSITION;
    float3 tangentW : TANGENT;
    float3 normalW  : NORMAL;
    float2 texC     : TEXCOORD;
};

VS_OUT VS(VS_IN vIn)
{
    VS_OUT vOut;
    // Transform to world space.
    vOut.posW     = mul(float4(vIn.posL, 1.0f), gWorld);
    vOut.tangentW = mul(float4(vIn.tangentL, 0.0f), gWorld);
    vOut.normalW  = mul(float4(vIn.normalL, 0.0f), gWorld);

    // Transform to homogeneous clip space.
    vOut.posH = mul(float4(vIn.posL, 1.0f), gWVP);

    // Output vertex attributes for interpolation across triangle.
    vOut.texC = mul(float4(vIn.texC, 0.0f, 1.0f), gTexMtx);

    return vOut;
}

float4 PS(VS_OUT pIn) : SV_Target
{
    float4 diffuse = gDiffuseMap.Sample( gTriLinearSam, pIn.texC );

    // Kill transparent pixels.
    clip(diffuse.a - 0.15f);

    float4 spec    = gSpecMap.Sample( gTriLinearSam, pIn.texC );
    float3 normalT = gNormalMap.Sample( gTriLinearSam, pIn.texC );

    // Map [0,1] --> [0,256]
    spec.a *= 256.0f;

    // Uncompress each component from [0,1] to [-1,1].
    normalT = 2.0f*normalT - 1.0f;
    // build orthonormal basis
    float3 N = normalize(pIn.normalW);
    float3 T = normalize(pIn.tangentW - dot(pIn.tangentW, N)*N);
    float3 B = cross(N,T);
    float3x3 TBN = float3x3(T, B, N);
    // Transform from tangent space to world space.
    float3 bumpedNormalW = normalize(mul(normalT, TBN));

    // Compute the lit color for this pixel using normal from normal map.
    SurfaceInfo v = {pIn.posW, bumpedNormalW, diffuse, spec};
    float3 litColor = ParallelLight(v, gLight, gEyePosW);

    [branch]
    if( gCubeMapEnabled )
    {
        float3 incident = pIn.posW - gEyePosW;
        float3 refW = reflect(incident, bumpedNormalW);
        float4 reflectedColor = gCubeMap.Sample(gTriLinearSam, refW);
        litColor += (gReflectMtrl*reflectedColor).rgb;
    }

    return float4(litColor, diffuse.a);
}

technique10 NormalMapTech
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0, VS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, PS() ) );
    }
}

Two lines that might not be clear are these:

float3 N = normalize(pIn.normalW);
float3 T = normalize(pIn.tangentW - dot(pIn.tangentW, N)*N);

After interpolation across the triangle, the tangent vector and normal vector may no longer be orthonormal. This code makes sure T is orthonormal to N by subtracting off any component of T along the direction N and renormalizing (the Gram-Schmidt step mentioned in 12.4).