Hidden Surface Removal / Visible Surface Determination

Recently I have been working as a data scientist at a start-up company in the Washington D.C. metro area. Our company does new-generation manufacturing, which involves CNC machining and 3-D printing.

Customers upload CAD files of the parts they want to build, and we (the development team) wrote an engine that analyzes the geometric features of each part and uses those geometry variables to predict the quoted price (which also involves machine learning at certain points).

One big issue is finding parts that contain unmachinable corners (inner corners with an infinitely small radius) and hidden surfaces (surfaces that cannot be reached by either the bottom or the side of the machining tool).

To find those unmachinable features, we have to find the invisible sides from all directions, so hidden surface removal plays an important role.

For a quick implementation we referred to the paper Direct Visibility of Point Sets, which is a quick-and-dirty approach that works most of the time. However, this method produces false positives in inner-corner regions, so those are "diminished" by thresholding them against the visible and side points.
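
For reference, here is a minimal sketch of that paper's hidden point removal (HPR) operator as I understand it: spherically flip the points about the viewpoint, then take a convex hull; points whose flipped images land on the hull are the visible ones. This assumes numpy and scipy are available; the function name and the radius_factor parameter are my own.

    import numpy as np
    from scipy.spatial import ConvexHull

    def hpr_visible_indices(points, viewpoint, radius_factor=100.0):
        # Hidden point removal: spherical flip followed by a convex hull.
        p = points - viewpoint                       # move the viewpoint to the origin
        norms = np.linalg.norm(p, axis=1, keepdims=True)
        R = radius_factor * norms.max()              # sphere radius, must satisfy R >= max ||p||
        flipped = p + 2.0 * (R - norms) * p / norms  # reflect each point about the sphere
        hull = ConvexHull(np.vstack([flipped, np.zeros((1, 3))]))  # hull of flipped cloud + viewpoint
        n = len(points)
        return sorted(i for i in hull.vertices if i < n)  # hull vertices = visible points

    # e.g. visible = hpr_visible_indices(cloud, np.array([0.0, 0.0, 10.0]))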

So I am currently implementing other algorithms that can put the GPU to work, such as simple ray tracing (also called ray casting) and z-buffering. Here I want to summarize all the visible surface determination algorithms I have learned about recently.

First of all, there are generally two types of approaches:

  • Object precision (object space)
    Algorithms do their work on the objects themselves, before they are converted to pixels in the frame buffer. The resolution of the display device is irrelevant here, as the calculation is done at the mathematical level of the objects.

Pseudocode:

    for each object a in the scene
        determine which parts of object a are visible
        (involves comparing the polygons in object a to other polygons in a
         and to polygons in every other object in the scene)
  • Image precision (image space)
    Algorithms do their work as the objects are being converted to pixels in the frame buffer. The resolution of the display device is important here, as this is done on a pixel-by-pixel basis.

Pseudocode:

    for each pixel in the frame buffer
        determine which polygon is closest to the viewer at that pixel location
        colour the pixel with the colour of that polygon at that location

In most game development cases, visible surface determination uses image precision, which relies on pixels. For our manufacturing purposes, however, I think we have to do it in object precision.

Object vs. Image Precision

Object Space Algorithms

  • Operate on geometric primitives
    • For each object in the scene, compute the part of it which isn’t obscured by any other object
    • Must perform tests at high precision
    • Resulting information is resolution-independent
  • Complexity

    • Must compare every pair of objects, so O(n²) for n objects
    • For an m×m display, have to fill in colors for m² pixels
    • Overall complexity can be O(k_obj·n² + k_disp·m²)
    • Best for scenes with few polygons or resolution-independent output
  • Implementation

    • Difficult to implement
    • Must carefully control numerical error

Image Space Algorithms

  • Operate on pixels
    • For each pixel in the scene, find the object closest to the COP (center of projection) that intersects the projector through that pixel, then draw
    • Perform tests at device resolution; the result works only for that resolution
  • Complexity
    • A naive approach checks all n objects at every pixel, so O(n·m²)
    • Better approaches check only the objects that could be visible at each pixel. Say that, on average, d objects are visible at each pixel (the depth complexity); then O(d·m²)
  • Implementation
    • Usually very simple
    • Used a lot in practice

Horizon Line Algorithm

  • An explicit function y = f(x, z) is represented as a 2D array of x and z values in which each entry is a y-value
  • The surface is composed of polylines; each polyline is at constant z

Algorithm

  1. Draw polylines of constant z from front (near) to back (far)
  2. Draw only the parts of each polyline that are visible: i.e., above/below the silhouette (horizon)

Implementation

Use two 1D arrays, YMAX and YMIN (with one entry for each x). When drawing a polyline of constant z, for each x value test whether the y-value is above YMAX or below YMIN (at that x location) and update the arrays, as sketched in code after the example below.

Example:

OLD YMAX: 30 28 26 25 24 29 34 33 32 34 36 33 30
OLD YMIN: 10 12 14 15 16 15 14 13 12 12 12 13 14
Polyline: 36 34 32 26 20 22 24 16 8 7 6 21 36

New YMAX: 36 34 32 26 24 29 34 33 32 34 36 33 36
New YMIN: 10 12 14 15 16 15 14 13 8 7 6 13 14
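
A minimal sketch of this update in Python (the function name is mine); running it on the rows above reproduces the New YMAX/YMIN rows:

    def update_horizon(ymax, ymin, polyline):
        # Process one constant-z polyline front to back: a sample is visible
        # only where it pokes above YMAX or dips below YMIN; either way the
        # horizon arrays are then updated in place.
        visible = []
        for x, y in enumerate(polyline):
            visible.append(y > ymax[x] or y < ymin[x])
            ymax[x] = max(ymax[x], y)
            ymin[x] = min(ymin[x], y)
        return visible

    ymax = [30, 28, 26, 25, 24, 29, 34, 33, 32, 34, 36, 33, 30]
    ymin = [10, 12, 14, 15, 16, 15, 14, 13, 12, 12, 12, 13, 14]
    poly = [36, 34, 32, 26, 20, 22, 24, 16, 8, 7, 6, 21, 36]
    update_horizon(ymax, ymin, poly)   # ymax/ymin now match the New rows above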

Characteristics:

  • Applied in image space (image precision)
  • Limited to explicit functions only
  • Exploits edge coherence
  • Applicable to free-form surfaces

Back Face Detection

  • Observation:
    In a volumetric object, we don't see the "back" faces of the object (self-occlusion)
    • Plane equation: Ax + By + Cz + D = 0
    • N = [A, B, C]^T is the plane normal
    • N points "outside"
  • Back-facing and front-facing faces can be identified using the sign of V·N, where V is the viewing direction (see the sketch after this list):
    • Three possibilities:
      • V·N > 0: back face
      • V·N < 0: front face
      • V·N = 0: on the line of view

  • Back face detection is easily applied to convex polyhedral objects
  • For convex objects, back face detection actually solves the visible surface problem
  • In a general object, a front face can be visible, invisible, or partially visible
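
A small sketch of the test in Python, using the sign convention above (V·N > 0 means back-facing); the helper names are mine, and the triangle is assumed to be wound counter-clockwise when seen from outside:

    import numpy as np

    def face_normal(v0, v1, v2):
        # Outward normal of a counter-clockwise-wound triangle (right-hand rule).
        return np.cross(v1 - v0, v2 - v0)

    def is_back_face(eye, v0, v1, v2):
        # V points from the eye toward the face; V.N > 0 means the face
        # points away from the viewer, i.e. it is a back face.
        n = face_normal(v0, v1, v2)
        view = v0 - eye
        return np.dot(view, n) > 0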

Quantitative Invisibility

  • Definitions
    • Every edge is associated with a non-negative value Qv, called its quantitative invisibility, which counts the number of times the edge is obscured
    • If Qv = 0, the edge is visible
    • An active edge is a silhouette edge, i.e., an edge shared by a back face and a front face
    • The visibility of an edge can change only where it intersects another active edge in the viewing plane
    • If the edge does not intersect any active edge, its visibility is homogeneous

Algorithm

  1. Select a single point on a line (the seed) and test how many polygons obscure it (with a brute-force algorithm; sketched in code below)
  2. Increment/decrement Qv any time the line intersects an active edge and the intersection is inside the view triangle
  3. Propagate from the end point to a neighboring line
  4. Fill the resulting polygons appropriately

    • Problem: How do we know whether the line "enters" or "leaves" an obscuring face?
    • Solution: If the edges of a polygon are listed clockwise when viewing the object from "outside", we can test the line direction against the direction of the intersecting edge that bounds the front face

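A brute-force sketch of step 1, the seed test: count how many triangles obscure the segment from a seed point to the eye. It uses a standard ray-triangle intersection (Möller-Trumbore); all names are mine.

    import numpy as np

    def ray_triangle_t(orig, direction, v0, v1, v2, eps=1e-9):
        # Moller-Trumbore: return the ray parameter t of the hit, or None.
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:
            return None                      # ray parallel to the triangle
        inv_det = 1.0 / det
        s = orig - v0
        u = np.dot(s, p) * inv_det
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(s, e1)
        v = np.dot(direction, q) * inv_det
        if v < 0.0 or u + v > 1.0:
            return None
        t = np.dot(e2, q) * inv_det
        return t if t > eps else None

    def quantitative_invisibility(seed, eye, triangles):
        # Qv of the seed: the number of triangles blocking the segment seed -> eye.
        direction = eye - seed
        qv = 0
        for v0, v1, v2 in triangles:
            t = ray_triangle_t(seed, direction, v0, v1, v2)
            if t is not None and t < 1.0:    # hit strictly between seed and eye
                qv += 1
        return qv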

Depth-Buffer Method (Z-Buffer)

  • In addition to the frame buffer (that keeps the pixel values), keep a Z-buffer containing the depth value of each pixel
  • Surfaces are scan-converted in an arbitrary order. For each pixel (x,y), the Z-value is computed as well. The pixel (x,y) is overwritten only if it is closer to the viewing plane than the pixel already written at the same location.

Algorithm

  1. Initialize the z-buffer and the frame buffer I:
     depth(x, y) = MAXZ; I(x, y) = I_background
  2. Calculate the depth z for each (x, y) position on any surface:
     if z < depth(x, y), then depth(x, y) = z and I(x, y) = I_surf(x, y)
    for each pixel p_i:
    {
        z_buffer[p_i] = FAR
        Fb[p_i] = BACKGROUND_COLOR
    }
    for each polygon P
    {
        for each pixel p_i in the projection of P
        {
            compute depth z and shade s of P at p_i
            if z < z_buffer[p_i]
            {
                z_buffer[p_i] = z
                Fb[p_i] = s
            }
        }
    }
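
The same loop in runnable Python, as a minimal sketch (NumPy only; all names are mine, and vertices are assumed already projected to screen space with smaller z meaning closer):

    import numpy as np

    def rasterize(triangles, width, height, far=1e9, background=0.0):
        # triangles: list of (verts, color), verts = three (x, y, z) rows.
        zbuf = np.full((height, width), far)
        frame = np.full((height, width), background)
        for verts, color in triangles:
            (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = verts
            area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
            if area == 0:
                continue                               # degenerate triangle
            xs, ys = (x0, x1, x2), (y0, y1, y2)
            # scan the bounding box of the triangle, clipped to the screen
            for y in range(max(int(min(ys)), 0), min(int(max(ys)) + 1, height)):
                for x in range(max(int(min(xs)), 0), min(int(max(xs)) + 1, width)):
                    # barycentric coordinates of the pixel
                    w0 = ((x1 - x) * (y2 - y) - (x2 - x) * (y1 - y)) / area
                    w1 = ((x2 - x) * (y0 - y) - (x0 - x) * (y2 - y)) / area
                    w2 = 1.0 - w0 - w1
                    if w0 < 0 or w1 < 0 or w2 < 0:
                        continue                       # pixel outside the triangle
                    z = w0 * z0 + w1 * z1 + w2 * z2    # interpolated depth
                    if z < zbuf[y, x]:                 # closer than what is stored?
                        zbuf[y, x] = z
                        frame[y, x] = color
        return frame, zbuf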

Characteristics

  • Very simple implementation in the case of polygon surfaces. Uses polygon scan-line conversion and exploits face coherence and scan-line coherence (see the sketch after this list):
    • z = -(Ax + By + D)/C
    • Along scan lines:
      z' = -(A(x+1) + By + D)/C = z - A/C
    • Between successive scan lines:
      z' = -(Ax + B(y+1) + D)/C = z - B/C
  • Implemented in the image space
  • Very common in hardware due to its simplicity
  • 32 bits per pixel for Z is common
  • Advantages:
    • Simple and easy to implement
    • Buffer may be saved with image for re-processing
  • Disadvantages:
    • Requires a lot of memory
    • Finite depth precision can cause problems
    • Spends time rendering polygons that turn out not to be visible
    • Requires re-calculation when the scale changes
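
A tiny sketch of the scan-line coherence trick from the list above: evaluate the plane equation in full once at the start of a span, after which each step in x costs a single addition (function and parameter names are mine):

    def scanline_depths(A, B, C, D, y, x0, x1):
        # Depth of the plane Ax + By + Cz + D = 0 along one scan line:
        # z(x, y) = -(A*x + B*y + D)/C, so stepping x -> x + 1 adds -A/C.
        z = -(A * x0 + B * y + D) / C    # full evaluation once, at the span start
        dz = -A / C                      # constant per-pixel increment
        depths = []
        for _ in range(x0, x1 + 1):
            depths.append(z)
            z += dz
        return depths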

Ray Casting

Algorithm

  1. Partition the projection plane into pixels to match screen resolution
  2. For each pixel p_i, construct a ray from the COP through the projection plane (PP) at that pixel and into the scene

    • Parameterize each ray:
      r(t) = c + t(p_i - c)
    • Each object O_i returns t_i > 1 such that the first intersection with O_i occurs at r(t_i)
  3. Intersect the ray with every object in the scene

  4. Color the pixel according to the object with the closest intersection

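Below is a minimal sketch of this loop for a scene of spheres, which have a cheap closed-form ray intersection; pixel_grid is assumed to hold the 3-D position of each pixel center on the projection plane, and all names are mine:

    import numpy as np

    def cast(eye, pixel_grid, spheres, background=0.0):
        # eye: the COP c; spheres: list of (center, radius, color).
        h, w, _ = pixel_grid.shape
        image = np.full((h, w), background)
        for i in range(h):
            for j in range(w):
                d = pixel_grid[i, j] - eye            # ray direction p_i - c
                t_best = np.inf
                for center, radius, color in spheres:
                    # solve |eye + t*d - center|^2 = radius^2, a quadratic in t
                    oc = eye - center
                    a = d @ d
                    b = 2.0 * (oc @ d)
                    c2 = oc @ oc - radius * radius
                    disc = b * b - 4.0 * a * c2
                    if disc < 0:
                        continue                       # the ray misses this sphere
                    sq = np.sqrt(disc)
                    for t in ((-b - sq) / (2 * a), (-b + sq) / (2 * a)):
                        if 1.0 < t < t_best:           # nearest hit beyond the plane
                            t_best = t
                            image[i, j] = color
                            break
        return image
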
Definitions

An algorithm exhibits coherence if it uses knowledge about the continuity of the objects on which it operates.
An online algorithm is one that doesn't need all the data to be present when it starts running.

References:
https://www.cs.uic.edu/~jbell/CourseNotes/ComputerGraphics/VisibleSurfaces.html
https://courses.cs.washington.edu/courses/csep557/03au/lectures/hidden-surfaces.pdf
