Visibility of Point Cloud


Currently, the hidden point removal (HPR) operator (Katz et al., 2007) is widely used for visibility analysis. Its advantage is that it avoids reconstructing a surface from the point cloud, which can be expensive, so visibility can be analyzed efficiently on both sparse and dense clouds. However, when the point cloud is noisy or non-uniformly sampled, a robust HPR operator (RHPR) (Mehra et al., 2010) is preferred for those cases.
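For reference, the HPR operator itself is compact: the points are reflected across a large sphere centered at the viewpoint ("spherical flipping"), and a point is visible exactly when its reflected image lies on the convex hull of the reflected cloud plus the viewpoint. A minimal sketch, assuming scipy is available (hpr_visible_indices and radius_factor are names chosen here, not taken from the paper):

import numpy as np
from scipy.spatial import ConvexHull

def hpr_visible_indices(points, camera, radius_factor=100.0):
    # points: (N, 3); camera: (3,). Returns indices of visible points.
    p = points - camera                    # move the viewpoint to the origin
    d = np.linalg.norm(p, axis=1, keepdims=True)
    R = radius_factor * d.max()            # radius of the flipping sphere
    flipped = p + 2.0 * (R - d) * (p / d)  # spherical flipping
    # A point is visible iff its flipped image is a vertex of the convex
    # hull of the flipped cloud together with the viewpoint (the origin).
    hull = ConvexHull(np.vstack([flipped, np.zeros(3)]))
    return hull.vertices[hull.vertices < len(points)]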

However, Katz's method produces false positives. There are several other methods for detecting hidden points:
* Surface-triangle based methods (normal-direction testing, triangle–ray intersection, Z-buffering); a minimal sketch of the normal-direction test follows this list
* Voxel based techniques (voxel–ray intersection, ray tracing, and Z-buffering)
* Hidden point removal (HPR)
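The normal-direction test mentioned in the first bullet reduces to a sign check on a dot product: a point whose outward normal faces away from the viewer cannot be seen from that side. A minimal sketch (backface_mask is a hypothetical helper, not taken from any of the cited papers):

import numpy as np

def backface_mask(normals, view_dir):
    # normals: (N, 3) unit normals; view_dir: (3,) direction pointing
    # from the camera into the scene. True marks back-facing points.
    return (normals @ view_dir) >= 0.0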

I am currently implementing ray tracing (in image space) and scan-line testing. The two ideas are essentially the same, except that ray tracing (casting) shoots rays from a camera (the viewpoint), while the scan-line approach casts parallel rays. The basic idea is: for each point in the point cloud, cast a ray through it and check whether some triangle in front of the point intersects the ray. The first step is therefore to loop over all triangles of the mesh and compute the intersection of the ray with each triangle's plane.
Secondly, we check whether the intersection lies inside the triangle; of course, the triangle must also be in front of the point. If both conditions hold, the point is hidden.
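The plane intersection in the first step is a one-liner. A minimal sketch (ray_plane_t is a hypothetical helper; with the camera translated to the origin, as in the code further below, origin is simply the zero vector):

import numpy as np

def ray_plane_t(origin, direction, v0, normal, eps=1e-12):
    # Parameter t at which origin + t * direction meets the plane
    # through vertex v0 with the given normal; None if the ray is
    # (nearly) parallel to the plane.
    denom = normal.dot(direction)
    if abs(denom) < eps:
        return None
    return normal.dot(v0 - origin) / denom

With the ray pointing from the camera to the query point, an intersection with 0 < t < 1 lies strictly between the camera and the point, i.e. the triangle occludes the point (once the point-in-triangle test below also passes).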

Speaking of meshes and point clouds: we have modified the python-pcl library so that it can generate a point cloud from a mesh, i.e. a much denser point distribution on the mesh surface. It is then easy to obtain each point's normal from the mesh.
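The patched python-pcl code is not shown here, but the sampling step itself amounts to area-weighted barycentric sampling of the mesh triangles. A minimal NumPy sketch (sample_mesh and its signature are illustrative, not the actual python-pcl API):

import numpy as np

def sample_mesh(v0, v1, v2, n_samples, rng=None):
    # v0, v1, v2: (T, 3) arrays of triangle vertices.
    if rng is None:
        rng = np.random.default_rng()
    # Pick triangles with probability proportional to their area.
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1)
    tri = rng.choice(len(v0), size=n_samples, p=area / area.sum())
    # Uniform barycentric coordinates via the square-root trick.
    r1 = np.sqrt(rng.random((n_samples, 1)))
    r2 = rng.random((n_samples, 1))
    pts = (1 - r1) * v0[tri] + r1 * (1 - r2) * v1[tri] + r1 * r2 * v2[tri]
    # Each sample inherits the (unit) normal of its source triangle.
    normals = cross[tri] / np.linalg.norm(cross[tri], axis=1, keepdims=True)
    return pts, normals, tri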

The ray_casting code can run on the CPU (though, with one intersection test per point–triangle pair, it is impractically slow), while the other methods require CUDA.

This is a helper function to check whether a point (assumed to lie in the triangle's plane) is inside the triangle. Note that all three sub-triangle areas must be computed explicitly: with unsigned areas, the shortcut gamma = 1 - alpha - beta can accept points that are actually outside the triangle:

import numpy as np
from numpy.linalg import norm

def PointInTriangle(pt, v0, v1, v2, eps=1e-6):
    # Unsigned barycentric test: pt is inside the triangle iff the three
    # sub-triangle areas sum to the full area. For a point outside the
    # triangle the unsigned sub-areas sum to more than the full area.
    area = norm(np.cross(v1 - v0, v2 - v0))   # twice the triangle area
    T0, T1, T2 = v0 - pt, v1 - pt, v2 - pt
    alpha = norm(np.cross(T0, T1)) / area     # weight of v2
    beta = norm(np.cross(T0, T2)) / area      # weight of v1
    gamma = norm(np.cross(T1, T2)) / area     # weight of v0
    return abs(alpha + beta + gamma - 1.0) < eps
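A quick sanity check with made-up values (the second point lies in the triangle's plane but outside the triangle):

import numpy as np
v0 = np.array([0.0, 0.0, 0.0])
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
print(PointInTriangle(np.array([0.2, 0.2, 0.0]), v0, v1, v2))   # True
print(PointInTriangle(np.array([0.2, -0.1, 0.0]), v0, v1, v2))  # False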

The algorithm is from Mathematics Stack Exchange (the mathematics counterpart of Stack Overflow).

def __ray_casting(self):
    points = self.points
    n_pts = len(points)
    mask = np.zeros(n_pts)
    # Points whose normals are (nearly) orthogonal to the viewing
    # direction lie on the silhouette as seen from the camera.
    orthogonal_mask = abs((self.normals * self.direction[None, :]).sum(1)) < 0.01
    # Perturb those silhouette points so that their normals are no
    # longer orthogonal to the viewing direction.
    points = self.perturb_points_tapered(orthogonal_mask)
    # Work in a frame where the camera sits at the origin; the camera
    # position itself is computed elsewhere in the class.
    camera_ray = points - self.camera_position
    n_tri = len(self.mesh.v0)
    v0 = self.mesh.v0 - self.camera_position
    v1 = self.mesh.v1 - self.camera_position
    v2 = self.mesh.v2 - self.camera_position
    for i in range(n_pts):
        ray = camera_ray[i, :]
        tri_idx = self.tri_indices[i]  # the triangle this point was sampled from
        for j in range(n_tri):
            if j == tri_idx:           # never test a point against its own triangle
                continue
            normal = self.mesh.normals[j, :]
            denom = normal.dot(ray)
            if abs(denom) < 1e-12:     # ray (nearly) parallel to the plane
                continue
            t = normal.dot(v0[j, :]) / denom
            # 0 < t < 1: the plane is hit strictly between the camera
            # (t = 0) and the point itself (t = 1).
            if 0 < t < 1:
                T = t * ray            # intersection point (camera at the origin)
                mask[i] += int(PointInTriangle(T, v0[j, :], v1[j, :], v2[j, :]))
    self._hidden_mask = mask > 0
    self._side_mask = ~self._hidden_mask & orthogonal_mask