Picking: Screen to Projection Window Transform


From "Introduction to 3D Game Programming with DirectX 9.0c: A Shader Approach".

 

The first task is to transform the clicked screen point to normalized device coordinates (see §6.4.4.3). The viewport matrix, which transforms vertices from normalized device coordinates to screen space, is given below:

$$
M = \begin{bmatrix}
\frac{\text{Width}}{2} & 0 & 0 & 0 \\
0 & -\frac{\text{Height}}{2} & 0 & 0 \\
0 & 0 & \text{MaxZ} - \text{MinZ} & 0 \\
X + \frac{\text{Width}}{2} & Y + \frac{\text{Height}}{2} & \text{MinZ} & 1
\end{bmatrix}
$$

Here, the variables of the viewport matrix refer to the D3DVIEWPORT9 structure:

typedef struct _D3DVIEWPORT9 {
    DWORD X;
    DWORD Y;
    DWORD Width;
    DWORD Height;
    float MinZ;
    float MaxZ;
} D3DVIEWPORT9;

Generally, for a game, the viewport is the entire back buffer and the depth buffer range is 0 to 1. Thus, X = 0, Y = 0, MinZ = 0, MaxZ = 1, Width = w, and Height = h, where w and h are the width and height of the back buffer, respectively. Assuming this is indeed the case, the viewport matrix simplifies to:

$$
M = \begin{bmatrix}
\frac{w}{2} & 0 & 0 & 0 \\
0 & -\frac{h}{2} & 0 & 0 \\
0 & 0 & 1 & 0 \\
\frac{w}{2} & \frac{h}{2} & 0 & 1
\end{bmatrix}
$$
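For reference, this common case corresponds to a viewport set up roughly as follows (a minimal sketch, not code from the book; gd3dDevice is assumed to be the application's IDirect3DDevice9 pointer and md3dPP its present parameters):

D3DVIEWPORT9 vp;
vp.X      = 0;
vp.Y      = 0;
vp.Width  = md3dPP.BackBufferWidth;   // w
vp.Height = md3dPP.BackBufferHeight;  // h
vp.MinZ   = 0.0f;
vp.MaxZ   = 1.0f;
gd3dDevice->SetViewport(&vp);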

Now let u = (ux, uy, uz, 1) be a point in normalized device space (i.e., -1 ≤ ux ≤ 1, -1 ≤ uy ≤ 1, and 0 ≤ uz ≤ 1). Transforming u to screen space yields:

$$
\begin{bmatrix} u_x & u_y & u_z & 1 \end{bmatrix} M
= \begin{bmatrix} \dfrac{u_x w + w}{2} & \dfrac{-u_y h + h}{2} & u_z & 1 \end{bmatrix}
$$

The coordinate uz is just used by the depth buffer; we are not concerned with any depth coordinates for picking. The 2D screen point p = (px, py) corresponding to u is just the transformed x- and y-coordinates:

$$
p_x = \frac{u_x w + w}{2}, \qquad p_y = \frac{-u_y h + h}{2}
$$

The above equations give us the screen point p in terms of the normalized device point u and the viewport dimensions. However, in our picking situation, we are initially given the screen point p and the viewport dimensions, and we want to find u. Solving the above equations for u yields:

$$
u_x = \frac{2 p_x}{w} - 1, \qquad u_y = \frac{-2 p_y}{h} + 1
$$
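As a small illustration, the inverse mapping can be wrapped in a helper like the following (a sketch, not the book's code; screenToNDC is a made-up name):

#include <d3dx9.h>

// Maps a screen point (px, py) back to normalized device coordinates,
// assuming the viewport covers the entire w x h back buffer.
D3DXVECTOR2 screenToNDC(float px, float py, float w, float h)
{
    float ux =  2.0f * px / w - 1.0f;  // ux = 2*px/w - 1
    float uy = -2.0f * py / h + 1.0f;  // uy = -2*py/h + 1
    return D3DXVECTOR2(ux, uy);
}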

We now have the clicked point in normalized space. However, we really want the clicked point on the projection window on the near plane in view space. Therefore, we ask ourselves which transformation took us from the projection window on the near plane in view space to normalized space, and then we just invert that transformation to go from normalized space to view space. Recall from §6.4.4.3 that we transformed projected vertices from the projection window on the near plane to normalized space by dividing the x-coordinate by nR tan(α/2) and dividing the y-coordinate by n tan(α/2), where α is the vertical field of view angle of the frustum, R is the aspect ratio, and n is the near plane distance. Therefore, to transform u from normalized space to the projection window on the near plane in view space, we multiply the x-coordinate of u by nR tan(α/2), and we multiply the y-coordinate of u by n tan(α/2). We now have the projected point p' = (nR tan(α/2)ux, n tan(α/2)uy, n) on the projection window on the near plane in view space. Shooting a ray through this point gives us the ray we want; however, we can simplify the math a bit.
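Before that simplification, the mapping just described would look roughly like the following (a sketch; ndcToNearPlane, fovY, aspect, and nearZ are my own names, not from the book):

#include <d3dx9.h>
#include <math.h>

// Maps an NDC point (ux, uy) to the projection window on the near plane
// in view space: scale x by n*R*tan(α/2) and y by n*tan(α/2).
D3DXVECTOR3 ndcToNearPlane(float ux, float uy, float fovY, float aspect, float nearZ)
{
    float t = tanf(0.5f * fovY);
    return D3DXVECTOR3(nearZ * aspect * t * ux,
                       nearZ * t * uy,
                       nearZ);
}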

Consider Figure 20.4 (we show only the y-axis situation, but the x-axis situation is analogous). Observe that the picking ray also passes through y at z = 1. Using similar triangles, we have:

(20.1) $\dfrac{y}{1} = \dfrac{n \tan(\alpha/2)\, u_y}{n} \;\Rightarrow\; y = u_y \tan(\alpha/2)$
[Figure 20.4: Similar triangles. Shooting a ray through y at z = 1 is the same as shooting a ray through n tan(α/2)uy at z = n.]

Similarly, for the x-coordinate, we have:

(20.2) $\dfrac{x}{1} = \dfrac{nR \tan(\alpha/2)\, u_x}{n} \;\Rightarrow\; x = u_x R \tan(\alpha/2)$

In other words, instead of shooting the ray through the point p' = (nR tan(α/2)ux, n tan(α/2)uy, n), we can just shoot it through the point:

(20.3) $\bigl(u_x R \tan(\alpha/2),\; u_y \tan(\alpha/2),\; 1\bigr)$

To reiterate, we can do this because, as seen in Figure 20.4, shooting a ray through the point of Equation 20.3 gives exactly the same ray as shooting it through p'. Equation 20.3 has the advantage of being independent of n and requiring fewer multiplications.
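As a quick numeric check (values chosen arbitrarily for illustration, not from the book), take tan(α/2) = 1, R = 1, n = 5, ux = 0.25, and uy = 0.5. Then

$$p' = \bigl(nR\tan(\alpha/2)\,u_x,\; n\tan(\alpha/2)\,u_y,\; n\bigr) = (1.25,\; 2.5,\; 5) = 5\,(0.25,\; 0.5,\; 1),$$

which is a scalar multiple of the point (0.25, 0.5, 1) from Equation 20.3, so both points define the same ray from the origin.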

Now recall the perspective projection matrix from §6.4.4.5. Observe that R tan(α/2) = 1/P00 and tan(α/2) = 1/P11, where P is the projection matrix. Hence, we can rewrite Equation 20.3 as follows:

(20.4) $\left(\dfrac{u_x}{P_{00}},\; \dfrac{u_y}{P_{11}},\; 1\right)$
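One way to convince yourself of this identity is to inspect a D3DX perspective matrix directly (a sketch with arbitrary parameter values; the function and variable names are mine, not the book's):

#include <d3dx9.h>

void checkProjectionEntries()
{
    D3DXMATRIX P;
    float fovY   = D3DX_PI / 4.0f;    // vertical field of view α
    float aspect = 800.0f / 600.0f;   // aspect ratio R = w/h
    D3DXMatrixPerspectiveFovLH(&P, fovY, aspect, 1.0f, 1000.0f);

    // For this matrix, P(0,0) equals 1/(aspect*tanf(fovY/2)) and
    // P(1,1) equals 1/tanf(fovY/2), which is what Equation 20.4 uses.
}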

The code that computes the picking ray in view space, and then transforms it to world space, is given below:

void TriPickDemo::getWorldPickingRay(D3DXVECTOR3& originW, D3DXVECTOR3& dirW)
{
    // Get the clicked screen point.
    POINT s;
    GetCursorPos(&s);

    // Make it relative to the client area window.
    ScreenToClient(mhMainWnd, &s);

    // By the way we've been constructing things, the entire
    // back buffer is the viewport.
    float w = (float)md3dPP.BackBufferWidth;
    float h = (float)md3dPP.BackBufferHeight;

    D3DXMATRIX proj = gCamera->proj();

    // Equation 20.4: transform the screen point to NDC, then divide by
    // P(0,0) and P(1,1) to get the ray's direction point at z = 1.
    float x = ( 2.0f*s.x/w - 1.0f) / proj(0,0);
    float y = (-2.0f*s.y/h + 1.0f) / proj(1,1);

    // Build picking ray in view space.
    D3DXVECTOR3 origin(0.0f, 0.0f, 0.0f);
    D3DXVECTOR3 dir(x, y, 1.0f);

    // The view matrix transforms from world space to view space, so its
    // inverse transforms from view space to world space.
    D3DXMATRIX invView;
    D3DXMatrixInverse(&invView, 0, &gCamera->view());

    // Transform the ray origin as a point and the ray direction as a vector.
    D3DXVec3TransformCoord(&originW, &origin, &invView);
    D3DXVec3TransformNormal(&dirW, &dir, &invView);
    D3DXVec3Normalize(&dirW, &dirW);
}
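Once the ray is in world space, it can be tested against scene geometry. A sketch of possible usage inside the demo class (the triangle vertices and calling context are hypothetical, not the book's demo code):

D3DXVECTOR3 originW, dirW;
getWorldPickingRay(originW, dirW);

// Hypothetical world-space triangle to test against.
D3DXVECTOR3 v0(-1.0f, 0.0f, 5.0f);
D3DXVECTOR3 v1( 1.0f, 0.0f, 5.0f);
D3DXVECTOR3 v2( 0.0f, 1.0f, 5.0f);

float u, v, dist;
if( D3DXIntersectTri(&v0, &v1, &v2, &originW, &dirW, &u, &v, &dist) )
{
    // Hit: the intersection point is originW + dist*dirW, and (u, v)
    // are the barycentric coordinates of the hit on the triangle.
}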