Perception (1.2)

4.1.2 Definition of Coordinate Systems

The global coordinate system is described by its origin lying at the center of the field, the x-axis pointing toward the opponent goal, the y-axis pointing to the left, and the z-axis pointing upward. Rotations are specified counter-clockwise, with the x-axis pointing toward 0° and the y-axis pointing toward 90°. In the robot-relative system of coordinates, the axes are defined as follows: the x-axis points forward, the y-axis points to the left, and the z-axis points upward.

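To make the convention concrete, the following is a small illustrative sketch (not B-Human framework code; the function name and parameters are hypothetical) of transforming a robot-relative point into the global field coordinate system:

```python
import math

# Illustrative helper (not part of the B-Human framework): convert a point
# from the robot-relative coordinate system into the global field system
# described above.  The robot pose is its global (x, y) position plus its
# heading theta, measured counter-clockwise from the global x-axis
# (which points toward the opponent goal).  Units are millimeters.
def robot_to_global(pose_x, pose_y, theta, rel_x, rel_y):
    c, s = math.cos(theta), math.sin(theta)
    return (pose_x + c * rel_x - s * rel_y,
            pose_y + s * rel_x + c * rel_y)

# Robot at the field center facing 90 deg (toward the global left):
# a point 1 m in front of the robot ends up 1 m to the global left.
gx, gy = robot_to_global(0.0, 0.0, math.pi / 2, 1000.0, 0.0)
print(round(gx), round(gy))  # → 0 1000
```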

4.1.2.1 Camera Matrix and Camera Calibration


The CameraMatrix is a representation containing the transformation matrix of the active camera of the NAO (cf. Sect. 4.1.1) that is provided by the CameraMatrixProvider. It is used for projecting objects onto the field as well as for the creation of the ImageCoordinateSystem (cf. Sect. 4.1.2.2). It is computed based on the TorsoMatrix, which represents the orientation and position of a specific point within the robot's torso relative to the ground (cf. Sect. 7.4). Using the RobotDimensions and the current joint angles, the transformation of the camera matrix relative to the torso matrix is computed as the RobotCameraMatrix. The latter is used to compute the BodyContour (cf. Sect. 4.1.3).

In addition to the fixed parameters from the RobotDimensions, some robot-specific parameters from the CameraCalibration are integrated. These are necessary because the camera cannot be mounted perfectly straight and the torso is not always perfectly vertical. A small variation in the camera's orientation can lead to significant errors when projecting farther objects onto the field. Manually calibrating the robot-specific correction parameters for a camera is a very time-consuming task, since the parameter space is quite large (8 parameters for calibrating the lower camera, 11 for both cameras). If a camera is miscalibrated, it is not always obvious which parameters have to be adapted.

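As an illustration of the kind of projection the CameraMatrix enables, the following sketch intersects a pixel's viewing ray with the ground plane. The camera model here is deliberately simplified (pinhole camera at height h with a pure downward tilt, no roll or pan); the function name and parameters are assumptions, not the framework's API, which derives the full transformation from the kinematic chain:

```python
import math

# Simplified sketch (assumed names, not B-Human API): project an image
# pixel (u, v) onto the field by intersecting its viewing ray with the
# ground plane z = 0.  f is the focal length in pixels, (cx, cy) the
# optical center, h the camera height in mm, tilt the downward pitch.
def pixel_to_field(u, v, f, cx, cy, h, tilt):
    # Ray in camera coordinates (x forward, y left, z up).
    ray = (1.0, -(u - cx) / f, -(v - cy) / f)
    # Rotate the ray by the camera tilt about the y-axis.
    c, s = math.cos(tilt), math.sin(tilt)
    rx = c * ray[0] + s * ray[2]
    rz = -s * ray[0] + c * ray[2]
    ry = ray[1]
    if rz >= 0:
        return None  # ray points at or above the horizon, no intersection
    scale = -h / rz
    return (rx * scale, ry * scale)

# Camera 500 mm above the ground, tilted 45 deg down: the image center
# pixel hits the field 500 mm in front of the camera.
p = pixel_to_field(320, 240, 600.0, 320, 240, 500.0, math.pi / 4)
print(round(p[0]), round(p[1]))  # → 500 0
```

The `None` case illustrates why a miscalibrated tilt is so harmful: near the horizon, `rz` approaches zero and the projected distance diverges, so a small angular error translates into a large positional error for distant objects.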

In particular during competitions, the robots' cameras require recalibration very often, e.g. after a robot returns from repair. In order to overcome this problem, an automatic CameraCalibrator module was introduced. It collects points on the field lines fully autonomously. The required points on the lines are provided by the LineSpotProvider (cf. Sect. 4.2.1). They are collected for both cameras after the head has moved to predefined angles. Although this calibrator significantly reduces the time needed for a calibration, it has a drawback in terms of precision: the line detection has problems detecting lines that are farther away, in particular when the color calibration is not very good. This can lead to inaccurate values for the estimated tilts of the cameras.


Notable features of the automatic CameraCalibrator are:

  • The user can mark arbitrary points on field lines if the automatic detection does not produce enough points. This is particularly useful during competitions, because it makes it possible to calibrate the camera even if parts of the field lines are covered (e.g. by robots or other team members).


  • Since both cameras are used, the calibration module is able to calibrate the parameters of the lower as well as the upper camera. To do so, the user simply has to mark additional reference points in the image of the upper camera.


  • In order to optimize the parameters, the Gauss-Newton algorithm is used instead of hill climbing. Since this algorithm is designed specifically for non-linear least squares problems such as this one, the time to converge is drastically reduced, to an average of 5–10 iterations. This has the additional advantage that the probability of convergence is increased.


  • During the calibration procedure, the robot stands on a defined spot on the field. Since the user is typically unable to place the robot exactly on that spot, and even a small deviation of the robot pose from its desired pose results in a large systematic error, additional correction parameters for the RobotPose are introduced and optimized simultaneously.


  • The error function takes into account the distance of a point to the nearest line in image coordinates instead of field coordinates. This is a more accurate error approximation, because the parameters and the error are in angular space.


  • Samples can be deleted manually by left-clicking into the image on the point where the sample was taken. Likewise, samples can be inserted manually by CTRL + left-clicking into the image at the point where the sample should be.


  • Commands are generated for correcting the body rotation. In case you do not want the BodyRotationCorrection stored in the CameraCalibration, you can manually call the JointCalibrator and transfer the values, or you can use the command set module:AutomaticCameraCalibrator:setJointOffsets true before running the optimization. After the optimization, a number of commands will be generated, and you can enter them in order of appearance to transfer the values into the JointCalibration.

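The Gauss-Newton optimization mentioned in the list above can be sketched as follows. The residual function and the data are a toy example (estimating a single offset from noisy measurements), not the calibrator's actual point-to-line error in image coordinates, and the function names are assumptions for illustration:

```python
# Hedged sketch of Gauss-Newton for a one-dimensional parameter p that
# minimizes the sum of squared residuals r_i(p).  Each step solves the
# normal equations (J^T J) dp = -J^T r, which for a scalar parameter
# reduces to simple sums.
def gauss_newton(residuals, jacobian, p, iterations=10):
    for _ in range(iterations):
        r = residuals(p)          # residual vector at the current estimate
        J = jacobian(p)           # derivative of each residual w.r.t. p
        jtj = sum(j * j for j in J)
        jtr = sum(j * ri for j, ri in zip(J, r))
        p -= jtr / jtj            # Gauss-Newton update step
    return p

# Toy problem: estimate an angular offset from noisy measurements.
# Residuals are linear here, so the method converges in a single step
# to the mean of the data.
data = [0.31, 0.29, 0.30, 0.32]
res = lambda p: [p - d for d in data]
jac = lambda p: [1.0 for _ in data]
print(round(gauss_newton(res, jac, 0.0), 3))  # → 0.305
```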

With these features, the module typically produces a parameter set that requires only few manual adjustments, if any. The calibration procedure is described in Sect. 2.8.3.

