3D Features, Surface Normals, Local Descriptors and KdTree


Concepts of Point Cloud from Point Cloud Library:


#####################################################################################


3D Feature


######################################################################################


In their native representation, points as defined in the concept of 3D mapping systems are simply represented using their Cartesian coordinates x, y, z, with respect to a given origin. Applications that need to compare points require richer characteristics and metrics to be able to distinguish between geometric surfaces. The concept of a 3D point as a singular entity with Cartesian coordinates therefore disappears, and a new concept, that of the local descriptor, takes its place.

The keypoints library contains implementations of two point cloud keypoint detection algorithms. Keypoints are points in an image or point cloud that are stable, distinctive, and can be identified using a well-defined detection criterion. Typically, the number of interest points in a point cloud will be much smaller than the total number of points in the cloud, and when used in combination with local feature descriptors at each keypoint, the keypoints and descriptors can be used to form a compact—yet descriptive—representation of the original data.


#######################################################################################


KdTree


#######################################################################################


In general, PCL features use approximate methods to compute the nearest neighbors of a query point, using fast kd-tree queries for finding the K nearest neighbors of a specific point or location. Each level of a k-d tree splits all children along a specific dimension, using a hyperplane that is perpendicular to the corresponding axis. At the root of the tree all children will be split based on the first dimension (i.e. if the first dimension coordinate is less than the root it will be in the left-sub tree and if it is greater than the root it will obviously be in the right sub-tree). Each level down in the tree divides on the next dimension, returning to the first dimension once all others have been exhausted.

There are two types of queries that we’re interested in:

  • determine the k (user-given parameter) neighbors of a query point (also known as k-search);
  • determine all the neighbors of a query point within a sphere of radius r (also known as radius-search).

##########################################################################################


Normal Estimation


##########################################################################################

Given a geometric surface, it’s usually trivial to infer the direction of the normal at a certain point on the surface as the vector perpendicular to the surface at that point. However, since the point cloud datasets that we acquire represent a set of point samples on the real surface, there are two possibilities:

  • obtain the underlying surface from the acquired point cloud dataset, using surface meshing techniques, and then compute the surface normals from the mesh;
  • use approximations to infer the surface normals from the point cloud dataset directly.

The problem of determining the normal to a point on the surface is approximated by the problem of estimating the normal of a plane tangent to the surface, which in turn becomes a least-square plane fitting estimation problem.

The solution for estimating the surface normal is therefore reduced to an analysis of the eigenvectors and eigenvalues (or PCA – Principal Component Analysis) of a covariance matrix created from the nearest neighbors of the query point. More specifically, for each point \boldsymbol{p}_i, we assemble the covariance matrix \mathcal{C} as follows:

\mathcal{C} = \frac{1}{k}\sum_{i=1}^{k}{(\boldsymbol{p}_i-\overline{\boldsymbol{p}})\cdot(\boldsymbol{p}_i-\overline{\boldsymbol{p}})^{T}}, ~\mathcal{C} \cdot \vec{{\mathsf v}_j} = \lambda_j \cdot \vec{{\mathsf v}_j},~ j \in \{0, 1, 2\}

Where k is the number of point neighbors considered in the neighborhood of \boldsymbol{p}_i, \overline{\boldsymbol{p}} represents the 3D centroid of the nearest neighbors, \lambda_j is the j-th eigenvalue of the covariance matrix, and \vec{{\mathsf v}_j} the j-th eigenvector.

To estimate a covariance matrix from a set of points in PCL, you can use the pcl::computeCovarianceMatrix function.

The actual compute call from the NormalEstimation class does nothing internally but:

for each point p in cloud P
  1. get the nearest neighbors of p
  2. compute the surface normal n of p
  3. check if n is consistently oriented towards the viewpoint and flip otherwise

#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>

{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);

  ... read, pass in or create a point cloud ...

  // Create the normal estimation class, and pass the input dataset to it
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud (cloud);

  // Create an empty kdtree representation, and pass it to the normal estimation object.
  // Its content will be filled inside the object, based on the given input dataset (as no other search surface is given).
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ> ());
  ne.setSearchMethod (tree);

  // Output datasets
  pcl::PointCloud<pcl::Normal>::Ptr cloud_normals (new pcl::PointCloud<pcl::Normal>);

  // Use all neighbors in a sphere of radius 3cm
  ne.setRadiusSearch (0.03);

  // Compute the features
  ne.compute (*cloud_normals);

  // cloud_normals->points.size () should have the same size as the input cloud->points.size ()
}



############################################################################################


Local Descriptor


############################################################################################


As point feature representations go, surface normals and curvature estimates are somewhat basic in their representations of the geometry around a specific point. Though extremely fast and easy to compute, they cannot capture much detail, as they approximate the geometry of a point’s k-neighborhood with only a few values. As a direct consequence, most scenes will contain many points with the same or very similar feature values, thus reducing their informative characteristics.