Using a Kinect and a Hokuyo on the same robot (multiple scan sensors)


ROS version: Indigo   Robot model: Kobuki

In the ROS navigation examples, gmapping and amcl both support 3D sensors, including a Kinect node, so you can run the examples and connect a Kinect directly. The Hokuyo likewise has a ready-made driver, hokuyo_node. To connect several sensors at the same time, the most important thing is to understand the architecture of robot navigation.

Within navigation, the node responsible for path planning and obstacle avoidance is move_base; mapping and localization are handled by slam_gmapping; amcl also handles localization (Monte Carlo localization); the sensor nodes handle data acquisition; and if you want to use a pre-built map you also need to start the map_server node. With that picture in mind, the demo launch files become easy to read.

That was a bit of a digression; back to the topic. For a scan sensor's node, first understand what it actually does. When the device is plugged into the PC, a device node should appear, provided the PC has loaded the corresponding driver. For a USB device, run lsusb and check that the device's VID and PID show up, then check that the device node was created; the Hokuyo, for example, appears as /dev/ttyACM0. What the ROS node then does is simply loop: read the data from /dev/ttyXXX, convert it into a LaserScan or PointCloud message, and publish it on a topic. Both the Kinect and Hokuyo nodes, for instance, publish LaserScan data on the /scan topic.
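To make the conversion step concrete, here is a minimal sketch (not the actual hokuyo_node source) of what such a driver does with each revolution of raw readings. The function name, the millimetre units, and the angle limits are assumptions for illustration; a real driver fills a sensor_msgs/LaserScan message with the same fields.

```python
def ranges_to_laserscan(ranges_mm, angle_min, angle_max, frame_id="laser"):
    """Convert one revolution of raw range readings (millimetres) into a
    LaserScan-like dict. The keys map 1:1 onto sensor_msgs/LaserScan fields;
    in a real node this dict would be a message published on /scan."""
    n = len(ranges_mm)
    if n < 2:
        raise ValueError("need at least two readings")
    return {
        "header": {"frame_id": frame_id},
        "angle_min": angle_min,
        "angle_max": angle_max,
        "angle_increment": (angle_max - angle_min) / (n - 1),
        "ranges": [r / 1000.0 for r in ranges_mm],  # mm -> m
    }

# The driver's main loop (pseudocode, device protocol omitted):
#   with open("/dev/ttyACM0", "rb") as dev:
#       while not shutdown_requested():
#           raw = read_one_revolution(dev)            # device-specific protocol
#           pub.publish(ranges_to_laserscan(raw, -2.09, 2.09))  # topic: /scan
```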

Once you know what a scan node does, the next step is clear.

Don't start everything through the xxx_demo.launch files; split the nodes out and launch each one separately, starting only what you need. After splitting them up I named each file xxx_only.launch. For example:

move_base_only.launch

<launch>
  <!-- Move base -->
  <include file="$(find turtlebot_navigation)/launch/includes/move_base.launch.xml"/>
</launch>

gmapping_only.launch

<launch>
  <!-- Gmapping -->
  <arg name="custom_gmapping_launch_file" default="$(find turtlebot_navigation)/launch/includes/gmapping/gmapping.launch.xml"/>
  <include file="$(arg custom_gmapping_launch_file)"/>
</launch>

amcl_only.launch

<launch>
  <!-- Map server -->
  <arg name="map_file" default="$(env TURTLEBOT_MAP_FILE)"/>
  <node name="map_server" pkg="map_server" type="map_server" args="$(arg map_file)"/>

  <!-- AMCL -->
  <arg name="custom_amcl_launch_file" default="$(find turtlebot_navigation)/launch/includes/amcl/amcl.launch.xml"/>
  <arg name="initial_pose_x" default="0.0"/> <!-- Use 17.0 for willow's map in simulation -->
  <arg name="initial_pose_y" default="0.0"/> <!-- Use 17.0 for willow's map in simulation -->
  <arg name="initial_pose_a" default="0.0"/>
  <include file="$(arg custom_amcl_launch_file)">
    <arg name="initial_pose_x" value="$(arg initial_pose_x)"/>
    <arg name="initial_pose_y" value="$(arg initial_pose_y)"/>
    <arg name="initial_pose_a" value="$(arg initial_pose_a)"/>
  </include>
</launch>

kinect_only.launch

<launch>
  <!-- 3D sensor -->
  <arg name="3d_sensor" default="$(env TURTLEBOT_3D_SENSOR)"/>  <!-- r200, kinect, asus_xtion_pro -->
  <include file="$(find turtlebot_bringup)/launch/3dsensor.launch">
    <arg name="rgb_processing" value="false"/>
    <arg name="depth_registration" value="false"/>
    <arg name="depth_processing" value="false"/>

    <!-- We must specify an absolute topic name because if not it will be prefixed by "$(arg camera)".
         Probably is a bug in the nodelet manager: https://github.com/ros/nodelet_core/issues/7 -->
    <arg name="scan_topic" value="/scan"/>
  </include>
</launch>

hokuyo_only.launch

<launch>
  <!-- laser driver -->
  <include file="$(find turtlebot_navigation)/laser/driver/hokuyo_laser.launch"/>
</launch>

This way the pieces can be combined however you like, without editing a demo.launch every time. The only drawback is that you need to open several terminal windows.
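If the extra terminals become tedious, the split files can themselves be recombined per scenario. A hypothetical aggregator launch file (the package name my_nav and file locations are assumptions) might look like:

```xml
<launch>
  <!-- hypothetical aggregator: include only the pieces this run needs -->
  <include file="$(find my_nav)/launch/move_base_only.launch"/>
  <include file="$(find my_nav)/launch/gmapping_only.launch"/>
  <include file="$(find my_nav)/launch/kinect_only.launch"/>
  <include file="$(find my_nav)/launch/hokuyo_only.launch"/>
</launch>
```

Keeping one such file per sensor combination preserves the mix-and-match benefit while needing only one terminal.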

Next: once both scan sensors run at the same time, they collide on /scan. The conflict points:

Conflict 1: since mapping is 2D, only one scan plane is used for building the map, so choose either the Kinect or the Hokuyo for that, and move the other sensor's published topic from /scan to /scan_xxx (any name will do).

For example, to change the topic the Kinect publishes on, two places need editing.

First: kinect_only.launch

<launch>
  <!-- 3D sensor -->
  <arg name="3d_sensor" default="$(env TURTLEBOT_3D_SENSOR)"/>  <!-- r200, kinect, asus_xtion_pro -->
  <include file="$(find turtlebot_bringup)/launch/3dsensor.launch">
    <arg name="rgb_processing" value="false"/>
    <arg name="depth_registration" value="false"/>
    <arg name="depth_processing" value="false"/>

    <!-- We must specify an absolute topic name because if not it will be prefixed by "$(arg camera)".
         Probably is a bug in the nodelet manager: https://github.com/ros/nodelet_core/issues/7 -->
    <arg name="scan_topic" value="/scan_kinect"/>
  </include>
</launch>

The key line is <arg name="scan_topic" value="/scan_kinect" />.

Second: 3dsensor.launch

<!-- Laserscan topic -->
<arg name="scan_topic" default="scan_kinect"/>

Modify the Hokuyo side the same way. If its launch file has no such argument, add a remap instead:
<remap from="/scan" to="/scan_hokuyo"/>
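Put together, a hokuyo_only.launch with that remap applied might look like the sketch below. This is an assumption about where the remap fits best; a remap declared before the include applies to the nodes the included file starts, but if the driver launch file publishes under a different name you would remap that name instead.

```xml
<launch>
  <!-- laser driver, with its output moved off /scan -->
  <remap from="/scan" to="/scan_hokuyo"/>
  <include file="$(find turtlebot_navigation)/laser/driver/hokuyo_laser.launch"/>
</launch>
```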


Conflict 2: for obstacle avoidance, the new sensor must be added as an obstacle source in the move_base node's costmap.

By default the costmap takes its scan data from /scan; adding a sensor means adding a source in the obstacle-layer configuration. Add a scan_kinect entry, and the costmap will then receive obstacle data from both scan sensors at once.

See the file turtlebot_navigation/param/costmap_common_params.yaml:

max_obstacle_height: 0.60  # assume something like an arm is mounted on top of the robot

# Obstacle Cost Shaping (http://wiki.ros.org/costmap_2d/hydro/inflation)
robot_radius: 0.20  # distance a circular robot should be clear of the obstacle (kobuki: 0.18)
# footprint: [[x0, y0], [x1, y1], ... [xn, yn]]  # if the robot is not circular

map_type: voxel

obstacle_layer:  # obstacle layer
  enabled:              true
  max_obstacle_height:  0.6
  origin_z:             0.0
  z_resolution:         0.2
  z_voxels:             2
  unknown_threshold:    15
  mark_threshold:       0
  combination_method:   1
  track_unknown_space:  true    # true needed for disabling global path planning through unknown space
  obstacle_range: 2.5
  raytrace_range: 3.0
  publish_voxel_map: false
  observation_sources: scan bump scan_kinect  # add a second obstacle scan source
  scan:
    data_type: LaserScan
    topic: scan
    marking: true
    clearing: true
    min_obstacle_height: 0.25  # adjust to the sensor's actual mounting height
    max_obstacle_height: 0.35  # adjust to the sensor's actual mounting height
  bump:
    data_type: PointCloud2
    topic: mobile_base/sensors/bumper_pointcloud
    marking: true
    clearing: false
    min_obstacle_height: 0.0
    max_obstacle_height: 0.15
  scan_kinect:
    data_type: LaserScan
    topic: /scan_kinect
    marking: true
    clearing: true
    min_obstacle_height: 0.x   # adjust to the sensor's actual mounting height
    max_obstacle_height: 0.xx  # adjust to the sensor's actual mounting height

# for debugging only, lets you see the entire voxel grid
# cost_scaling_factor and inflation_radius were now moved to the inflation_layer ns
inflation_layer:
  enabled:              true
  cost_scaling_factor:  5.0  # exponential rate at which the obstacle cost drops off (default: 10)
  inflation_radius:     0.5  # max. distance from an obstacle at which costs are incurred for planning paths

static_layer:
  enabled: true
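The min/max obstacle heights in the config above depend on how high each sensor is mounted. As a sketch of the arithmetic, this hypothetical helper (the name and the default half-band width are assumptions, not part of any ROS API) picks a band centered on the scan plane:

```python
def obstacle_height_band(mount_height_m, half_band_m=0.05):
    """Return (min, max) obstacle heights for a 2D scanner mounted
    mount_height_m metres above the floor, marking obstacles within
    +/- half_band_m of the scan plane. The lower bound is clamped to
    the floor (0.0)."""
    lo = max(0.0, mount_height_m - half_band_m)
    hi = mount_height_m + half_band_m
    return lo, hi

# A sensor mounted 0.30 m above the floor gets roughly (0.25, 0.35),
# matching the 'scan' entry in the YAML above.
```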
 