OpenStack Administration 24 - Notes on Testing Ceph-VM Connectivity
Objective
Test the connectivity between Ceph and VMs, and basic volume usage.
Create VMs
Hosts 128030 and 129094 are freshly installed nova compute hosts provisioned via puppet.
The Ceph-VM connectivity test will be carried out on these two hosts.
nova boot --flavor b2c_web_1core --image Centos6.3_1.3 --security_group default --nic net-id=9106aee4-2dc0-4a6d-a789-10c53e2b88c1 ceph-test01.sh.vclound.com --availability-zone nova:sh-compute-128030.sh.vclound.com
+--------------------------------------+------------------------------------------------------+
| Property                             | Value                                                |
+--------------------------------------+------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                               |
| OS-EXT-AZ:availability_zone          | nova                                                 |
| OS-EXT-SRV-ATTR:host                 | -                                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                    |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000020d                                    |
| OS-EXT-STS:power_state               | 0                                                    |
| OS-EXT-STS:task_state                | scheduling                                           |
| OS-EXT-STS:vm_state                  | building                                             |
| OS-SRV-USG:launched_at               | -                                                    |
| OS-SRV-USG:terminated_at             | -                                                    |
| accessIPv4                           |                                                      |
| accessIPv6                           |                                                      |
| adminPass                            | 5DCHoj8ihwN6                                         |
| config_drive                         |                                                      |
| created                              | 2015-06-25T03:49:14Z                                 |
| flavor                               | b2c_web_1core (5)                                    |
| hostId                               |                                                      |
| id                                   | 99d37977-a13a-4a8b-b8b1-e613a4959623                 |
| image                                | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4) |
| key_name                             | -                                                    |
| metadata                             | {}                                                   |
| name                                 | ceph-test01.sh.vclound.com                           |
| os-extended-volumes:volumes_attached | []                                                   |
| progress                             | 0                                                    |
| security_groups                      | default                                              |
| status                               | BUILD                                                |
| tenant_id                            | bb0b51d166254dc99bc7462c0ac002ff                     |
| updated                              | 2015-06-25T03:49:14Z                                 |
| user_id                              | 226e71f1c1aa4bae85485d1d17b6f0ae                     |
+--------------------------------------+------------------------------------------------------+

nova boot --flavor b2c_web_1core --image Centos6.3_1.3 --security_group default --nic net-id=9106aee4-2dc0-4a6d-a789-10c53e2b88c1 ceph-test02.sh.vclound.com --availability-zone nova:sh-compute-129094.sh.vclound.com
+--------------------------------------+------------------------------------------------------+
| Property                             | Value                                                |
+--------------------------------------+------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                               |
| OS-EXT-AZ:availability_zone          | nova                                                 |
| OS-EXT-SRV-ATTR:host                 | -                                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                    |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000020f                                    |
| OS-EXT-STS:power_state               | 0                                                    |
| OS-EXT-STS:task_state                | scheduling                                           |
| OS-EXT-STS:vm_state                  | building                                             |
| OS-SRV-USG:launched_at               | -                                                    |
| OS-SRV-USG:terminated_at             | -                                                    |
| accessIPv4                           |                                                      |
| accessIPv6                           |                                                      |
| adminPass                            | wHddAW33sFBE                                         |
| config_drive                         |                                                      |
| created                              | 2015-06-25T03:51:03Z                                 |
| flavor                               | b2c_web_1core (5)                                    |
| hostId                               |                                                      |
| id                                   | b433b227-14ab-4157-8f08-362ad680e35e                 |
| image                                | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4) |
| key_name                             | -                                                    |
| metadata                             | {}                                                   |
| name                                 | ceph-test02.sh.vclound.com                           |
| os-extended-volumes:volumes_attached | []                                                   |
| progress                             | 0                                                    |
| security_groups                      | default                                              |
| status                               | BUILD                                                |
| tenant_id                            | bb0b51d166254dc99bc7462c0ac002ff                     |
| updated                              | 2015-06-25T03:51:03Z                                 |
| user_id                              | 226e71f1c1aa4bae85485d1d17b6f0ae                     |
+--------------------------------------+------------------------------------------------------+
Instance status
[root@sh-controller-129022 ~(keystone_admin)]# nova list
+--------------------------------------+----------------------------+--------+------------+-------------+---------------------------+
| ID                                   | Name                       | Status | Task State | Power State | Networks                  |
+--------------------------------------+----------------------------+--------+------------+-------------+---------------------------+
| 99d37977-a13a-4a8b-b8b1-e613a4959623 | ceph-test01.sh.vclound.com | ACTIVE | -          | Running     | SH_DEV_NET=10.198.192.254 |
| b433b227-14ab-4157-8f08-362ad680e35e | ceph-test02.sh.vclound.com | ACTIVE | -          | Running     | SH_DEV_NET=10.198.192.255 |
+--------------------------------------+----------------------------+--------+------------+-------------+---------------------------+
Create volumes
[root@sh-controller-129022 ~(keystone_admin)]# cinder create 50
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2015-06-25T06:11:58.840626           |
| display_description | None                                 |
| display_name        | None                                 |
| encrypted           | False                                |
| id                  | 8516fb02-b578-4e57-9678-d30d2b0a6734 |
| metadata            | {}                                   |
| size                | 50                                   |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# cinder create 50
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2015-06-25T06:12:07.151001           |
| display_description | None                                 |
| display_name        | None                                 |
| encrypted           | False                                |
| id                  | 9d8aa395-5e6a-411a-9f19-6375f29e9f9f |
| metadata            | {}                                   |
| size                | 50                                   |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# cinder create 50
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2015-06-25T06:12:14.321030           |
| display_description | None                                 |
| display_name        | None                                 |
| encrypted           | False                                |
| id                  | a5751c38-01c0-4f25-a02c-7d2a05d6ea36 |
| metadata            | {}                                   |
| size                | 50                                   |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+
List the volumes
[root@sh-controller-129022 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 8516fb02-b578-4e57-9678-d30d2b0a6734 | available | None         | 50   | None        | false    |             |
| 9d8aa395-5e6a-411a-9f19-6375f29e9f9f | available | None         | 50   | None        | false    |             |
| a5751c38-01c0-4f25-a02c-7d2a05d6ea36 | available | None         | 50   | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
Attach the volumes
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-attach 99d37977-a13a-4a8b-b8b1-e613a4959623 8516fb02-b578-4e57-9678-d30d2b0a6734
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | 8516fb02-b578-4e57-9678-d30d2b0a6734 |
| serverId | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| volumeId | 8516fb02-b578-4e57-9678-d30d2b0a6734 |
+----------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-attach 99d37977-a13a-4a8b-b8b1-e613a4959623 9d8aa395-5e6a-411a-9f19-6375f29e9f9f
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdd                             |
| id       | 9d8aa395-5e6a-411a-9f19-6375f29e9f9f |
| serverId | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| volumeId | 9d8aa395-5e6a-411a-9f19-6375f29e9f9f |
+----------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-attach b433b227-14ab-4157-8f08-362ad680e35e a5751c38-01c0-4f25-a02c-7d2a05d6ea36
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | a5751c38-01c0-4f25-a02c-7d2a05d6ea36 |
| serverId | b433b227-14ab-4157-8f08-362ad680e35e |
| volumeId | a5751c38-01c0-4f25-a02c-7d2a05d6ea36 |
+----------+--------------------------------------+
Verify the attachments
[root@sh-controller-129022 ~(keystone_admin)]# nova show 99d37977-a13a-4a8b-b8b1-e613a4959623
+--------------------------------------+--------------------------------------------------------------------------------------------------+
| Property                             | Value                                                                                            |
+--------------------------------------+--------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                                                             |
| OS-EXT-SRV-ATTR:host                 | sh-compute-128030.sh.vclound.com                                                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | sh-compute-128030.sh.vclound.com                                                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000020d                                                                                |
| OS-EXT-STS:power_state               | 1                                                                                                |
| OS-EXT-STS:task_state                | -                                                                                                |
| OS-EXT-STS:vm_state                  | active                                                                                           |
| OS-SRV-USG:launched_at               | 2015-06-25T03:49:26.000000                                                                       |
| OS-SRV-USG:terminated_at             | -                                                                                                |
| SH_DEV_NET network                   | 10.198.192.254                                                                                   |
| accessIPv4                           |                                                                                                  |
| accessIPv6                           |                                                                                                  |
| config_drive                         |                                                                                                  |
| created                              | 2015-06-25T03:49:14Z                                                                             |
| flavor                               | b2c_web_1core (5)                                                                                |
| hostId                               | 8b5b75df8b0271d739323f1373b7363d432bb9c68b079ab3e94e1c1a                                         |
| id                                   | 99d37977-a13a-4a8b-b8b1-e613a4959623                                                             |
| image                                | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4)                                             |
| key_name                             | -                                                                                                |
| metadata                             | {}                                                                                               |
| name                                 | ceph-test01.sh.vclound.com                                                                       |
| os-extended-volumes:volumes_attached | [{"id": "8516fb02-b578-4e57-9678-d30d2b0a6734"}, {"id": "9d8aa395-5e6a-411a-9f19-6375f29e9f9f"}] |  <- two volumes attached
| progress                             | 0                                                                                                |
| security_groups                      | default                                                                                          |
| status                               | ACTIVE                                                                                           |
| tenant_id                            | bb0b51d166254dc99bc7462c0ac002ff                                                                 |
| updated                              | 2015-06-25T03:49:26Z                                                                             |
| user_id                              | 226e71f1c1aa4bae85485d1d17b6f0ae                                                                 |
+--------------------------------------+--------------------------------------------------------------------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# nova show b433b227-14ab-4157-8f08-362ad680e35e
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | sh-compute-129094.sh.vclound.com                         |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | sh-compute-129094.sh.vclound.com                         |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000020f                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2015-06-25T03:52:05.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| SH_DEV_NET network                   | 10.198.192.255                                           |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2015-06-25T03:51:03Z                                     |
| flavor                               | b2c_web_1core (5)                                        |
| hostId                               | a5239c63509fa00ab056ca701363538ecc0afe41d8f886f82b345b4d |
| id                                   | b433b227-14ab-4157-8f08-362ad680e35e                     |
| image                                | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4)     |
| key_name                             | -                                                        |
| metadata                             | {}                                                       |
| name                                 | ceph-test02.sh.vclound.com                               |
| os-extended-volumes:volumes_attached | [{"id": "a5751c38-01c0-4f25-a02c-7d2a05d6ea36"}]         |  <- one volume attached
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | bb0b51d166254dc99bc7462c0ac002ff                         |
| updated                              | 2015-06-25T03:52:05Z                                     |
| user_id                              | 226e71f1c1aa4bae85485d1d17b6f0ae                         |
+--------------------------------------+----------------------------------------------------------+
Check cinder status
[root@sh-controller-129022 ~(keystone_admin)]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Display Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 8516fb02-b578-4e57-9678-d30d2b0a6734 | in-use | None         | 50   | None        | false    | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| 9d8aa395-5e6a-411a-9f19-6375f29e9f9f | in-use | None         | 50   | None        | false    | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| a5751c38-01c0-4f25-a02c-7d2a05d6ea36 | in-use | None         | 50   | None        | false    | b433b227-14ab-4157-8f08-362ad680e35e |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
Testing
Test volume read/write
[root@ceph-test01 ~]# pvcreate /dev/vdc /dev/vdd
  Physical volume "/dev/vdc" successfully created
  Physical volume "/dev/vdd" successfully created
[root@ceph-test01 ~]# vgcreate myvg /dev/vdc /dev/vdd
  Volume group "myvg" successfully created
[root@ceph-test01 ~]# lvcreate -i 2 -n mylv -l 100%FREE myvg
  Using default stripesize 64.00 KiB
  Logical volume "mylv" created
[root@ceph-test01 ~]# yum install -y xfsprogs.x86_64 > /dev/null 2>&1
[root@ceph-test01 ~]# mkfs.xfs /dev/myvg/mylv
meta-data=/dev/myvg/mylv isize=256    agcount=16, agsize=1638256 blks
         =               sectsz=512   attr=2, projid32bit=0
data     =               bsize=4096   blocks=26212096, imaxpct=25
         =               sunit=16     swidth=32 blks
naming   =version 2      bsize=4096   ascii-ci=0
log      =internal log   bsize=4096   blocks=12800, version=2
         =               sectsz=512   sunit=16 blks, lazy-count=1
realtime =none           extsz=4096   blocks=0, rtextents=0
[root@ceph-test01 ~]# mount /dev/myvg/mylv /mnt
[root@ceph-test01 ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/vda1               20G  1.1G   18G   6% /
tmpfs                  939M     0  939M   0% /dev/shm
/dev/mapper/myvg-mylv  100G   33M  100G   1% /mnt
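The -i 2 flag stripes the logical volume across both RBD-backed physical volumes, so writes to /mnt alternate between /dev/vdc and /dev/vdd in 64 KiB chunks. One way to confirm the layout is the segment view of lvs (a sketch; exact field names and output vary slightly between LVM versions):

```shell
# Show the segment layout of myvg: stripe count, stripe size and backing devices.
lvs --segments -o lv_name,seg_size,stripes,stripe_size,devices myvg
# a striped segment spanning /dev/vdc and /dev/vdd with a 64.00k stripe is expected
```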
Check the current Ceph status
[root@sh-ceph-128213 ~]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    290T     290T      4615M        0
POOLS:
    NAME        ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd         0      0          0         99157G        0
    volumes     1      37220k     0         99157G        28    <- note: at this point the volumes pool holds only 28 objects
Run the write test on the VM
[root@ceph-test01 ~]# dd if=/dev/zero of=/mnt/1.img bs=1M count=700000
dd: error writing '/mnt/1.img': No space left on device
102309+0 records in
102308+0 records out
107278180352 bytes (107 GB) copied, 982.231 s, 109 MB/s
Monitor Ceph storage usage
[root@sh-ceph-128212 var]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    290T     290T      337G         0.11
POOLS:
    NAME        ID     USED        %USED     MAX AVAIL     OBJECTS
    rbd         0      0           0         99033G        0
    volumes     1      102399M     0.03      99033G        25622    <- dd wrote a single file in the guest, but Ceph now holds [25622 - 28] ≈ 25,594 extra small objects
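The jump in the object count matches RBD's chunking: an RBD image is split into objects of the default 4 MiB size, and those objects are materialized only as data is written, so a ~100 GiB write should create roughly 100 GiB / 4 MiB = 25,600 data objects. A back-of-envelope check (the observed figure differs slightly because of metadata and partially written objects):

```shell
# Assuming the default RBD object size of 4 MiB:
used_mib=102399    # "USED" reported for the volumes pool by `ceph df`
object_mib=4       # default RBD chunk size
expected=$(( (used_mib + object_mib - 1) / object_mib ))   # ceiling division
echo "expected data objects: ${expected}"                  # 25600, close to the observed 25622-28
```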
Monitor Ceph physical storage usage
[root@sh-ceph-128212 ceph-0]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  1.8G   49G   4% /
devtmpfs                  32G     0   32G   0% /dev
tmpfs                     32G     0   32G   0% /dev/shm
tmpfs                     32G   18M   32G   1% /run
tmpfs                     32G     0   32G   0% /sys/fs/cgroup
/dev/sda2                494M  123M  372M  25% /boot
/dev/mapper/centos-home  3.6T   33M  3.6T   1% /home
/dev/sdb1                3.7T  3.8G  3.7T   1% /var/lib/ceph/osd/ceph-0
/dev/sdc1                3.7T  4.2G  3.7T   1% /var/lib/ceph/osd/ceph-1
/dev/sdd1                3.7T  4.3G  3.7T   1% /var/lib/ceph/osd/ceph-2
/dev/sdf1                3.7T  4.2G  3.7T   1% /var/lib/ceph/osd/ceph-3
/dev/sdg1                3.7T  4.1G  3.7T   1% /var/lib/ceph/osd/ceph-4
/dev/sdh1                3.7T  4.2G  3.7T   1% /var/lib/ceph/osd/ceph-5
/dev/sde1                3.7T  4.2G  3.7T   1% /var/lib/ceph/osd/ceph-6
/dev/sdi1                3.7T  3.7G  3.7T   1% /var/lib/ceph/osd/ceph-7
/dev/sdj1                3.7T  3.9G  3.7T   1% /var/lib/ceph/osd/ceph-8
/dev/sdk1                3.7T  4.6G  3.7T   1% /var/lib/ceph/osd/ceph-9    <- note the used space
The file written by dd has been scattered across the different disks, and the amount landing on each OSD is not necessarily equal. No file with the original name can be found under /var/lib/ceph/osd/ceph*; there is only a pile of small object files.
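Rather than digging through the OSD filesystems, Ceph can be asked directly where an image's objects live. A sketch, run on a node with client.admin access; the image name follows cinder's volume-&lt;uuid&gt; convention and the prefix variable is just an illustration:

```shell
# Every data object of an RBD image shares its block_name_prefix.
rbd -p volumes info volume-8516fb02-b578-4e57-9678-d30d2b0a6734

# Count the backing objects of that one image:
PREFIX=$(rbd -p volumes info volume-8516fb02-b578-4e57-9678-d30d2b0a6734 \
         | awk '/block_name_prefix/ {print $2}')
rados -p volumes ls | grep -c "$PREFIX"
```

Each of those objects is then placed onto a placement group and a set of OSDs by CRUSH, which is why the data ends up spread over all ten disks.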
Unmount the volume
[root@ceph-test01 ~]# rm -rf /mnt/1.img
[root@ceph-test01 ~]# umount /mnt
Check Ceph usage
[root@sh-controller-128022 cinder]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    290T     290T      304G         0.10
POOLS:
    NAME        ID     USED        %USED     MAX AVAIL     OBJECTS
    rbd         0      0           0         99044G        0
    volumes     1      102399M     0.03      99044G        25622    <- deleting 1.img inside the guest does not directly reduce Ceph usage
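This is expected: rm only updates XFS metadata inside the guest, while the RBD objects backing the freed blocks are untouched. Ceph could only reclaim them if the whole chain passed discard/TRIM through, which this virtio-blk setup most likely did not. A hedged sketch of what that would look like inside the guest, assuming the disk were exposed with discard support (e.g. virtio-scsi with discard=unmap):

```shell
# Option 1: online discard - blocks are trimmed as files are deleted.
mount -o discard /dev/myvg/mylv /mnt

# Option 2: keep the default mount and trim on demand after deleting files.
fstrim -v /mnt   # -v reports how many bytes were discarded
```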
Detach the volumes in OpenStack
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-detach 99d37977-a13a-4a8b-b8b1-e613a4959623 8516fb02-b578-4e57-9678-d30d2b0a6734
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-detach 99d37977-a13a-4a8b-b8b1-e613a4959623 9d8aa395-5e6a-411a-9f19-6375f29e9f9f
Delete the OpenStack volumes
[root@sh-controller-129022 ~(keystone_admin)]# cinder delete 8516fb02-b578-4e57-9678-d30d2b0a6734
[root@sh-controller-129022 ~(keystone_admin)]# cinder delete 9d8aa395-5e6a-411a-9f19-6375f29e9f9f
Check Ceph storage usage again
[root@sh-controller-128022 cinder]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    290T     290T      15961M       0
POOLS:
    NAME        ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd         0      0          0         99131G        0
    volumes     1      37220k     0         99131G        24    <- the objects created by 1.img have been removed
[root@sh-ceph-128212 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   50G  1.8G   49G   4% /
devtmpfs                  32G     0   32G   0% /dev
tmpfs                     32G     0   32G   0% /dev/shm
tmpfs                     32G   18M   32G   1% /run
tmpfs                     32G     0   32G   0% /sys/fs/cgroup
/dev/sda2                494M  123M  372M  25% /boot
/dev/mapper/centos-home  3.6T   33M  3.6T   1% /home
/dev/sdb1                3.7T   58M  3.7T   1% /var/lib/ceph/osd/ceph-0    <- for reference, the disks are back to their previous usage
/dev/sdc1                3.7T   62M  3.7T   1% /var/lib/ceph/osd/ceph-1
/dev/sdd1                3.7T   55M  3.7T   1% /var/lib/ceph/osd/ceph-2
/dev/sdf1                3.7T   61M  3.7T   1% /var/lib/ceph/osd/ceph-3
/dev/sdg1                3.7T   63M  3.7T   1% /var/lib/ceph/osd/ceph-4
/dev/sdh1                3.7T   56M  3.7T   1% /var/lib/ceph/osd/ceph-5
/dev/sde1                3.7T   56M  3.7T   1% /var/lib/ceph/osd/ceph-6
/dev/sdi1                3.7T   63M  3.7T   1% /var/lib/ceph/osd/ceph-7
/dev/sdj1                3.7T   59M  3.7T   1% /var/lib/ceph/osd/ceph-8
/dev/sdk1                3.7T   60M  3.7T   1% /var/lib/ceph/osd/ceph-9
This demonstrates that deleting a cloud volume automatically releases the underlying Ceph space.
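To double-check the same conclusion on the Ceph side, the volumes pool can be inspected directly; after the two cinder delete calls only the remaining attached volume's image should still be listed (a sketch, assuming admin access on a ceph node):

```shell
rbd -p volumes ls -l   # long listing: name, size and format of each remaining image
rados df               # per-pool object and usage counts, consistent with `ceph df`
```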