Testing the multipath feature with OpenStack lioadm (by quqi99)


Copyright notice: this article may be freely reposted, but please include a hyperlink to the original source, the author information, and this copyright notice (http://blog.csdn.net/quqi99)

Problem

I previously wrote an article about testing the OpenStack multipath feature with tgtadm. tgt is a userspace iSCSI target, while LIO is a kernel-space iSCSI target that has been merged into the mainline Linux kernel.
In the OpenStack Icehouse release, cinder only supports a single target (it does not yet support the iscsi_secondary_ip_addresses option for configuring a second target), so tgtadm on Icehouse cannot support multipath. With lioadm, however, a few configuration changes make it work.

Setting up the OpenStack environment

A simple OpenStack deployment with cinder is sufficient; ceph is not required. Part of the installation steps in an earlier article can be used as a reference.

Install the LIO target and load the target_core_mod module on the cinder node

First load the target_core_mod module:

juju ssh cinder/0
sudo apt install linux-image-extra-$(uname -r)  # Avoid the error 'Module target_core_mod not found'
sudo apt build-dep linux-image-$(uname -r)
sudo modprobe target_core_mod

Then install the LIO target on the cinder node:

# Install LIO-target - https://www.thomas-krenn.com/de/wiki/Linux-IO_Target_(LIO)_unter_Ubuntu_14.04
sudo apt install open-iscsi targetcli python-urwid lio-utils python-pyparsing python-prettytable python-rtslib python-configshell
sudo pip install 'rtslib-fb>=2.1.39'

Note that there is a pitfall here: the Icehouse version of /usr/bin/cinder-rtstool requires rtslib-fb>=2.1.39, so we must install it with "sudo pip install 'rtslib-fb>=2.1.39'". However, the rtslib-fb module is older and has been deprecated, while the targetcli tool uses the newer rtslib module.
If we remove the rtslib-fb module (sudo pip uninstall -y rtslib-fb) and switch to rtslib, targetcli works again, but /usr/bin/cinder-rtstool fails with 'ImportError: No module named rtslib_fb' when running the following command:
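As an aside, this import conflict could also be absorbed in code rather than by pinning packages. The helper below is a hypothetical sketch (it is not part of the Icehouse source) of how a tool like cinder-rtstool could prefer rtslib_fb and fall back to rtslib:

```python
import importlib


def import_first(*names):
    """Return the first importable module from names.

    Hypothetical helper: cinder-rtstool could use it as
    rtslib = import_first("rtslib_fb", "rtslib") so that either
    installed package satisfies the import.
    """
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %r could be imported" % (names,))
```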

sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool create /dev/cinder-volumes/volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea zAXMzsNKJ4kBvDYCZBec VyhWebHq3GBKE22zYjpX

Fortunately, /usr/bin/cinder-rtstool does not itself use targetcli, so we can simply keep the 'rtslib-fb>=2.1.39' module installed.
Here is an example of targetcli working normally:

root@juju-c0c753-trusty-icehouse-0:~# targetcli
targetcli GIT_VERSION (rtslib GIT_VERSION)
Copyright (c) 2011-2013 by Datera, Inc.
All rights reserved.
/> ls
o- / ......................................................... [...]
  o- backstores .............................................. [...]
  | o- fileio ................................... [0 Storage Object]
  | o- iblock ................................... [0 Storage Object]
  | o- pscsi .................................... [0 Storage Object]
  | o- rd_dr .................................... [0 Storage Object]
  | o- rd_mcp ................................... [0 Storage Object]
  o- ib_srpt ........................................... [0 Targets]
  o- iscsi ............................................. [0 Targets]
  o- loopback .......................................... [0 Targets]
  o- qla2xxx ........................................... [0 Targets]
  o- tcm_fc ............................................ [0 Targets]
/>

Further changes on the cinder node

Icehouse's lioadm also supports only one iSCSI target by default. To support more than one, make the following changes so that two ports (3260 and 3261) on a single cinder node (10.5.0.22) serve as multipath endpoints.

sudo sed -i 's/import rtslib/import rtslib_fb as rtslib/g' /usr/bin/cinder-rtstool
sudo sed -i 's/if target == None:/if not target:/g' /usr/bin/cinder-rtstool
# For this step, you must replace IPADDRESS with the actual IP of the cinder/0 node.
sudo sed -i "s/rtslib.NetworkPortal(tpg_new, '0.0.0.0', 3260, mode='any')/rtslib.NetworkPortal(tpg_new, '10.5.0.22', 3260, mode='any')\n\t rtslib.NetworkPortal(tpg_new, '10.5.0.22', 3261, mode='any')/g" /usr/bin/cinder-rtstool
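The third sed command simply duplicates the NetworkPortal call so that the same target portal group publishes two portals. Sketched in plain Python (the portal_args helper is hypothetical; the real code passes each tuple to rtslib.NetworkPortal(tpg_new, ip, port, mode='any')):

```python
# Hypothetical sketch of what the patched cinder-rtstool does: instead of a
# single 0.0.0.0:3260 portal, create one portal per (ip, port) pair inside
# the same TPG, so the initiator logs in over both ports and dm-multipath
# can combine the two sessions into one map.
def portal_args(ip, ports=(3260, 3261)):
    return [(ip, port, "any") for port in ports]


print(portal_args("10.5.0.22"))
# [('10.5.0.22', 3260, 'any'), ('10.5.0.22', 3261, 'any')]
```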

Switch to lioadm via a charm change (this can of course also be done by hand)

http_proxy=http://squid.internal:3128 git clone https://github.com/openstack/charm-cinder.git
cd charm-cinder
sed -i 's/tgtadm/lioadm/g' templates/icehouse/cinder.conf
juju upgrade-charm cinder --path $PWD

Stop the tgt service

juju ssh cinder/0 sudo service tgt stop

When using lioadm, unlike with tgtadm, there is no need to configure the following parameter in nova.conf on the compute node to enable multipath:

[libvirt]
iscsi_use_multipath = True
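For reference, a minimal check that such a flag parses as expected (the nova.conf fragment is inlined here for illustration; the real nova parses its config via oslo.config, not configparser):

```python
import configparser

# Parse the [libvirt] fragment and read the multipath flag: with lioadm this
# flag is unnecessary, while a tgtadm-based setup would need it set to True.
NOVA_CONF = """
[libvirt]
iscsi_use_multipath = True
"""

cfg = configparser.ConfigParser()
cfg.read_string(NOVA_CONF)
print(cfg.getboolean("libvirt", "iscsi_use_multipath"))  # True
```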

Use a Windows image

We test with a Windows image:

source ~/novarc && glance image-download --file windows2012R2_virtio.raw --progress 31dd4e9f-ccd3-4c57-b10e-6b5e99366240
source novarc && glance image-create --name windows2012R2 --file /bak/windows2012R2_virtio.raw --visibility public --progress --container-format bare --disk-format raw
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova flavor-create myflavor auto 3200 45 1
openstack server create --wait --image windows2012R2 --flavor myflavor --key-name mykey --nic net-id=dd269a94-5b76-4e24-8046-4d377fa3be5f --min 1 --max 1 i1
nova floating-ip-create
nova floating-ip-associate i1 10.5.150.2
./tools/sec_groups.sh

The Windows image is fairly large (31G), so the default glance disk is too small and image creation would fail. Remove the glance unit and redeploy it with the 'root-disk=90G' constraint:

juju remove-unit glance/0
juju remove-application glance
juju deploy cs:~openstack-charmers-next/glance --constraints "mem=1G root-disk=90G" --series trusty
juju add-relation nova-cloud-controller glance
juju add-relation nova-compute glance
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation glance "cinder:image-service"
juju add-relation glance rabbitmq-server

As for how to access the Windows VM through a graphical RDP session, rather than the command line, from behind multiple layers of internal networks, see an earlier article.

Create a disk for the VM

Create a disk and attach it to the VM:

cinder create --display_name test_volume 1
nova volume-attach i1 1a3e3146-5df7-49ed-8041-de1de257a300
root@juju-c0c753-trusty-icehouse-7:~# sudo iscsiadm -m session
tcp: [1] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-8237a312-7512-41f5-a02a-34856fa3896e
tcp: [2] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1a3e3146-5df7-49ed-8041-de1de257a300
root@juju-c0c753-trusty-icehouse-7:~# sudo iscsiadm -m node
10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1a3e3146-5df7-49ed-8041-de1de257a300
10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-8237a312-7512-41f5-a02a-34856fa3896e
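A small script can group the 'iscsiadm -m session' output by target IQN to count how many paths each volume has; at this point each volume still has a single path. The sample below reuses the two session lines shown above:

```python
import re
from collections import defaultdict

# Two session lines copied from the iscsiadm -m session output above.
SESSIONS = """\
tcp: [1] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-8237a312-7512-41f5-a02a-34856fa3896e
tcp: [2] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1a3e3146-5df7-49ed-8041-de1de257a300
"""


def paths_per_target(output):
    """Map each target IQN to the list of portals it is logged in through."""
    paths = defaultdict(list)
    for line in output.splitlines():
        m = re.match(r"tcp: \[\d+\] (\S+),\d+ (\S+)", line)
        if m:
            portal, iqn = m.groups()
            paths[iqn].append(portal)
    return dict(paths)


for iqn, portals in paths_per_target(SESSIONS).items():
    print(iqn, len(portals))
```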

After logging into the Windows VM, the new disk is visible via the "Get-Disk" command in PowerShell.
Meanwhile, on the compute node, libvirt has generated the following configuration for the Windows VM:

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/mapper/360014050fd353e4dd274f20b1abd70e4'/>
  <target dev='vdb' bus='virtio'/>
  <serial>1035ee80-339e-4e4e-b4c9-6c925cb259ea</serial>
  <alias name='virtio-disk1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>

The multipath information looks like this:

sudo apt install multipath-tools
# multipath -ll
360014050fd353e4dd274f20b1abd70e4 dm-0 LIO-ORG ,IBLOCK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sdc   8:32   active ready  running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdd   8:48   active ready  running
root@juju-c0c753-trusty-icehouse-7:~# sudo iscsiadm -m session
tcp: [1] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-8237a312-7512-41f5-a02a-34856fa3896e
tcp: [2] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1a3e3146-5df7-49ed-8041-de1de257a300
tcp: [3] 10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea
tcp: [4] 10.5.0.22:3261,1 iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea
root@juju-c0c753-trusty-icehouse-7:~# sudo iscsiadm -m node
10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1a3e3146-5df7-49ed-8041-de1de257a300
10.5.0.22:3261,1 iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea
10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea
10.5.0.22:3260,1 iqn.2010-10.org.openstack:volume-8237a312-7512-41f5-a02a-34856fa3896e
root@juju-c0c753-trusty-icehouse-7:~# ls /dev/mapper/360014050fd353e4dd274f20b1abd70e4
/dev/mapper/360014050fd353e4dd274f20b1abd70e4
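The topology can also be checked programmatically; the parser below extracts the SCSI path devices from 'multipath -ll' output (sample copied from above), which should list two devices once both portals are logged in:

```python
import re

# Topology sample copied from the multipath -ll output above.
TOPOLOGY = """\
360014050fd353e4dd274f20b1abd70e4 dm-0 LIO-ORG ,IBLOCK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:0 sdc   8:32   active ready  running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:0 sdd   8:48   active ready  running
"""


def path_devices(output):
    """Return the sdX devices behind a dm-multipath map, in listed order.

    Each path line carries an H:C:T:L address (e.g. 4:0:0:0) followed by
    the block device name, which is what we match on.
    """
    return re.findall(r"\d+:\d+:\d+:\d+ (sd\w+)", output)


print(path_devices(TOPOLOGY))  # ['sdc', 'sdd']
```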

Debugging

echo 'show config' | sudo multipathd -k
echo 'show map 360014051a18c4162bbe48939111e6439 topology' | sudo multipathd -k
echo 'switch map 360014051a18c4162bbe48939111e6439 group 2' | sudo multipathd -k
iscsiadm -m session -P 3
sudo dd if=/dev/mapper/360014051a18c4162bbe48939111e6439 of=test bs=1M count=10240
iostat -m 1 20 | grep -E "sda|sdb|Device"

Detach the disk

When detaching the disk, a series of error messages appeared in syslog, but the operation itself worked and the disk was detached normally. After creating another volume with cinder create and repeating the test many times, the problem never recurred; the cause of the first occurrence is unknown.

nova volume-detach i1 1035ee80-339e-4e4e-b4c9-6c925cb259ea
2017-11-29 07:30:13.641 25277 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'186c37006bd94287ae768e1f80676584', 'tenant': u'becbf8797c954e2492d62a42a43a4324', 'user_identity': u'186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -'}
2017-11-29 07:30:13.867 25277 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'186c37006bd94287ae768e1f80676584', 'tenant': u'becbf8797c954e2492d62a42a43a4324', 'user_identity': u'186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -'}
2017-11-29 07:30:13.876 25277 DEBUG cinder.openstack.common.lockutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Got semaphore "1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume" for method "lvo_inner2"... inner /usr/lib/python2.7/dist-packages/cinder/openstack/common/lockutils.py:191
2017-11-29 07:30:13.877 25277 DEBUG cinder.openstack.common.lockutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Attempting to grab file lock "1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume" for method "lvo_inner2"... inner /usr/lib/python2.7/dist-packages/cinder/openstack/common/lockutils.py:202
2017-11-29 07:30:13.878 25277 DEBUG cinder.openstack.common.lockutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Got file lock "1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume" at /var/lock/cinder/cinder-1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume for method "lvo_inner2"... inner /usr/lib/python2.7/dist-packages/cinder/openstack/common/lockutils.py:232
2017-11-29 07:30:14.245 25277 DEBUG cinder.volume.manager [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] volume 1035ee80-339e-4e4e-b4c9-6c925cb259ea: removing export detach_volume /usr/lib/python2.7/dist-packages/cinder/volume/manager.py:687
2017-11-29 07:30:14.263 25277 INFO cinder.brick.iscsi.iscsi [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Removing iscsi_target: 1035ee80-339e-4e4e-b4c9-6c925cb259ea
2017-11-29 07:30:14.264 25277 DEBUG cinder.openstack.common.processutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool delete iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea execute /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py:147
2017-11-29 07:30:14.738 25277 DEBUG cinder.openstack.common.processutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Result was 0 execute /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py:171
2017-11-29 07:30:14.754 25277 DEBUG cinder.openstack.common.lockutils [req-e5121fa9-5b8d-48ac-a378-21c5c049fda8 186c37006bd94287ae768e1f80676584 becbf8797c954e2492d62a42a43a4324 - - -] Released file lock "1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume" at /var/lock/cinder/cinder-1035ee80-339e-4e4e-b4c9-6c925cb259ea-detach_volume for method "lvo_inner2"... inner /usr/lib/python2.7/dist-packages/cinder/openstack/common/lockutils.py:239
2017-11-29 07:31:02.297 25277 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._publish_service_capabilities run_periodic_tasks /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:178
2017-11-29 07:31:02.300 25277 DEBUG cinder.manager [-] Notifying Schedulers of capabilities ... _publish_service_capabilities /usr/lib/python2.7/dist-packages/cinder/manager.py:128
2017-11-29 07:31:02.321 25277 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:178
2017-11-29 07:31:02.323 25277 INFO cinder.volume.manager [-] Updating volume status
2017-11-29 07:31:02.323 25277 DEBUG cinder.volume.drivers.lvm [-] Updating volume stats _update_volume_stats /usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py:346
2017-11-29 07:31:02.325 25277 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix cinder-volumes execute /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py:147
2017-11-29 07:31:02.469 25277 DEBUG cinder.openstack.common.processutils [-] Result was 0 execute /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py:171
Nov 29 07:29:57 juju-c0c753-trusty-icehouse-7 kernel: [86241.589558]  connection1:0: detected conn error (1020)
Nov 29 07:29:57 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:29:57 juju-c0c753-trusty-icehouse-7 kernel: [86242.360928]  connection2:0: detected conn error (1020)
Nov 29 07:29:58 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:00 juju-c0c753-trusty-icehouse-7 kernel: [86244.630641]  connection1:0: detected conn error (1020)
Nov 29 07:30:00 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:00 juju-c0c753-trusty-icehouse-7 kernel: [86245.399222]  connection2:0: detected conn error (1020)
Nov 29 07:30:01 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:03 juju-c0c753-trusty-icehouse-7 kernel: [86247.672850]  connection1:0: detected conn error (1020)
Nov 29 07:30:03 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:03 juju-c0c753-trusty-icehouse-7 kernel: [86248.442433]  connection2:0: detected conn error (1020)
Nov 29 07:30:04 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:06 juju-c0c753-trusty-icehouse-7 kernel: [86250.702435]  connection1:0: detected conn error (1020)
Nov 29 07:30:06 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:06 juju-c0c753-trusty-icehouse-7 kernel: [86251.461198]  connection2:0: detected conn error (1020)
Nov 29 07:30:07 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:09 juju-c0c753-trusty-icehouse-7 kernel: [86253.725045]  connection1:0: detected conn error (1020)
Nov 29 07:30:09 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 kernel: [86254.494474]  connection2:0: detected conn error (1020)
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 kernel: [86254.659090] type=1400 audit(1511940610.186:17): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-6e468b3b-6cb4-4d5d-a9d6-6ba32b4bd8cb" pid=16120 comm="apparmor_parser"
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: dm-0: add map (uevent)
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: dm-0: devmap already registered
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:10 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: dm-0: remove map (uevent)
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: 360014050fd353e4dd274f20b1abd70e4: devmap removed
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: 360014050fd353e4dd274f20b1abd70e4: stop event checker thread (140170797860608)
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: dm-0: remove map (uevent)
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: uevent trigger error
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: sdc: remove path (uevent)
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 kernel: [86255.657776] sd 4:0:0:0: [sdc] Synchronizing SCSI cache
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 multipathd: sdd: remove path (uevent)
Nov 29 07:30:11 juju-c0c753-trusty-icehouse-7 kernel: [86256.365042] sd 5:0:0:0: [sdd] Synchronizing SCSI cache
Nov 29 07:30:12 juju-c0c753-trusty-icehouse-7 iscsid: Connection3:0 to [target: iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea, portal: 10.5.0.22,3260] through [iface: default] is shutdown.
Nov 29 07:30:12 juju-c0c753-trusty-icehouse-7 iscsid: Connection4:0 to [target: iqn.2010-10.org.openstack:volume-1035ee80-339e-4e4e-b4c9-6c925cb259ea, portal: 10.5.0.22,3261] through [iface: default] is shutdown.
Nov 29 07:30:12 juju-c0c753-trusty-icehouse-7 kernel: [86257.129920]  connection1:0: detected conn error (1020)
Nov 29 07:30:13 juju-c0c753-trusty-icehouse-7 kernel: [86257.893417]  connection2:0: detected conn error (1020)
Nov 29 07:30:13 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:14 juju-c0c753-trusty-icehouse-7 iscsid: conn 0 login rejected: target error (03/01)
Nov 29 07:30:16 juju-c0c753-trusty-icehouse-7 iscsid: connect to 10.5.0.22:3260 failed (Connection refused)
Nov 29 07:30:16 juju-c0c753-trusty-icehouse-7 iscsid: connect to 10.5.0.22:3260 failed (Connection refused)
Nov 29 07:30:19 juju-c0c753-trusty-icehouse-7 iscsid: connect to 10.5.0.22:3260 failed (Connection refused)
Nov 29 07:30:20 juju-c0c753-trusty-icehouse-7 iscsid: connect to 10.5.0.22:3260 failed (Connection refused)