Linux Software RAID Implementation

I. Software RAID levels:
       1. RAID 0: striping, no redundancy, so a single disk failure takes out the array. Built from two or more disks; the array size is the sum of all member disks. For example, two 1GB disks in a RAID 0 array yield a 2GB array.
       2. RAID 1: mirroring, which maximizes the availability and recoverability of user data. Two disks hold identical, simultaneously updated copies of the data, giving very good redundancy against disk failure. Write performance is low, and it is the only RAID level that can host the /boot partition. A mirror uses disk space inefficiently: the array size equals the capacity of the smallest member disk.
       3. RAID 5: made up of three or more disks, plus zero or more hot-spare disks. It strikes a good balance between performance and reliability: parity is distributed across all member disks for redundancy, so the failure of one disk does not take down the whole array, and both read and write performance improve.
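The capacity rules above can be sketched as a small shell calculation (`raid_capacity` is a hypothetical helper name; it assumes all member disks are the same size):

```shell
#!/bin/bash
# Usable capacity for n identical disks of size s (same unit in, same unit out).
# A minimal sketch, not part of mdadm; assumes identical member disk sizes.
raid_capacity() {
  local level=$1 n=$2 s=$3
  case $level in
    0) echo $(( n * s )) ;;        # RAID 0: sum of all member disks
    1) echo "$s" ;;                # RAID 1: one disk's worth (mirrored)
    5) echo $(( (n - 1) * s )) ;;  # RAID 5: one disk's worth lost to parity
  esac
}

raid_capacity 0 2 1   # two 1GB disks in RAID 0 -> prints 2
raid_capacity 1 2 1   # two 1GB disks in RAID 1 -> prints 1
raid_capacity 5 3 1   # three 1GB disks in RAID 5 -> prints 2
```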
II. The mdadm command on Linux:
       1. Creating a RAID device:
       (1) RAID 0: mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb5 /dev/sdb6
————————————————————————————————————
[root@station1 proc]# mdadm -C /dev/md0 -l 0 -n 2 /dev/sdb5 /dev/sdb6
mdadm: /dev/sdb5 appears to contain an ext2fs file system
    size=987964K  mtime=Thu Jan  1 08:00:00 1970
mdadm: /dev/sdb5 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Fri Aug 14 15:16:24 2009
mdadm: /dev/sdb6 appears to contain an ext2fs file system
    size=987964K  mtime=Thu Jan  1 08:00:00 1970
mdadm: /dev/sdb6 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Fri Aug 14 15:16:24 2009
Continue creating array? (y/n) y
mdadm: array /dev/md0 started.
—————————————————————————————
-C: create a RAID device;
       /dev/md0: name of the new RAID device;
       -a yes: create the device file automatically;
       -l 0: RAID level of the new device; the value after -l can be 0, 1, or 5;
       -n 2: the new RAID device uses 2 disks;
       /dev/sdb5 /dev/sdb6: the two devices that make up the new RAID array.
       (2) RAID 1: mdadm -C /dev/md1 -l 1 -n 2 /dev/sdb7 /dev/sdb8
       (3) RAID 5: mdadm -C /dev/md3 -l 5 -n 3 /dev/sdb1 /dev/sdb2 /dev/sdb3
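Before running mdadm -C, it can help to sanity-check that the device count fits the chosen level, following the minimums described in section I. A sketch (`check_raid` and `min_devices` are hypothetical helper names, not part of mdadm):

```shell
#!/bin/bash
# Check device count against the RAID level before building the mdadm command.
min_devices() {
  case $1 in
    0|1) echo 2 ;;   # RAID 0 and RAID 1 need at least two members
    5)   echo 3 ;;   # RAID 5 needs at least three members
    *)   echo 0 ;;
  esac
}

check_raid() {
  local level=$1; shift
  local need; need=$(min_devices "$level")
  if [ "$#" -lt "$need" ]; then
    echo "RAID $level needs at least $need devices, got $#"
    return 1
  fi
  echo "ok: RAID $level with $# devices"
  # Would then run: mdadm -C /dev/mdX -a yes -l "$level" -n "$#" "$@"
}

check_raid 5 /dev/sdb1 /dev/sdb2              # -> too few devices, returns 1
check_raid 5 /dev/sdb1 /dev/sdb2 /dev/sdb3    # -> ok
```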
       2. Checking the status of a RAID device:
              mdadm --detail /dev/md0
—————————————————————————————
[root@station1 proc]# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Fri Aug 14 15:23:42 2009
     Raid Level : raid0
     Array Size : 1975680 (1929.70 MiB 2023.10 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Fri Aug 14 15:23:42 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
     Chunk Size : 64K
           UUID : 271f853e:bf021179:1391832c:c53ea858
         Events : 0.1
    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync  /dev/sdb5
       1       8       22        1      active sync  /dev/sdb6
—————————————————————————————
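Individual fields can be pulled out of the `mdadm --detail` output with awk. A minimal sketch; a here-string sample stands in for the real command output so no root access or live array is needed (`detail_field` is a hypothetical helper name):

```shell
#!/bin/bash
# Extract one field from `mdadm --detail` output: lines look like
# "     Raid Level : raid0", i.e. "label : value" separated by " : ".
detail_field() {
  # $1 = field label; stdin = mdadm --detail output
  awk -F' : ' -v key="$1" '$1 ~ key { gsub(/^[ \t]+|[ \t]+$/, "", $2); print $2; exit }'
}

sample='     Raid Level : raid0
 Active Devices : 2
          State : clean'

echo "$sample" | detail_field "Raid Level"   # -> raid0
echo "$sample" | detail_field "State"        # -> clean
```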
3. Stopping (deactivating) a RAID device:
       mdadm -S /dev/md0
-------------------------------------------------------------------------------
[root@station1 ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
-------------------------------------------------------------------------------
4. Formatting a RAID device (the two commands below are equivalent):
       mkfs.ext3 /dev/md0
       mke2fs -j /dev/md0
       5. Simulating a disk failure:
       mdadm /dev/md3 -f /dev/sdb1
-------------------------------------------------------------------------------
[root@station1 ~]# mdadm /dev/md3 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md3
[root@station1 ~]# mdadm --detail /dev/md3
/dev/md3:
        Version : 00.90.03
  Creation Time : Fri Aug 14 15:21:26 2009
     Raid Level : raid5
     Array Size : 1975680 (1929.70 MiB 2023.10 MB)
  Used Dev Size : 987840 (964.85 MiB 1011.55 MB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 3
    Persistence : Superblock is persistent
    Update Time : Fri Aug 14 15:34:19 2009
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 5cb4d436:48c9041b:da36683d:589cd18d
         Events : 0.4
    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       18        1      active sync  /dev/sdb2
       2       8       19        2      active sync  /dev/sdb3
 
       3       8       17        -      faulty spare  /dev/sdb1
-------------------------------------------------------------------------------
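A degraded array like the one above can also be spotted programmatically: in /proc/mdstat a failed or missing member shows up as an underscore inside the status brackets (e.g. [3/2] [_UU]). A sketch using a here-string sample of that format (`degraded_arrays` is a hypothetical helper name; in real use you would read /proc/mdstat itself):

```shell
#!/bin/bash
# Print the names of arrays whose member-status brackets contain "_".
degraded_arrays() {
  awk '/^md/ { name=$1 } /\[[U_]+\]/ && /_/ { print name }'
}

mdstat='md3 : active raid5 sdb2[1] sdb3[2]
      1975680 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]
md0 : active raid0 sdb6[1] sdb5[0]
      1975680 blocks 64k chunks'

echo "$mdstat" | degraded_arrays   # -> md3
```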
6. Removing the failed disk from the RAID device:
       mdadm /dev/md3 -r /dev/sdb1
-------------------------------------------------------------------------------
[root@station1 ~]# mdadm /dev/md3 -r /dev/sdb1
mdadm: hot removed /dev/sdb1
[root@station1 ~]# mdadm --detail /dev/md3
...(middle portion omitted)...
    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       18        1      active sync  /dev/sdb2
       2       8       19        2      active sync  /dev/sdb3
-------------------------------------------------------------------------------
7. Adding a new disk to the RAID device:
       mdadm /dev/md3 -a /dev/sdb5
-------------------------------------------------------------------------------
[root@station1 ~]# mdadm /dev/md3 -a /dev/sdb5
mdadm: added /dev/sdb5
[root@station1 ~]# mdadm --detail /dev/md3
/dev/md3:
    ...(middle portion omitted)...
    Number   Major   Minor   RaidDevice State
       0       8       21        0      active sync  /dev/sdb5
       1       8       18        1      active sync  /dev/sdb2
       2       8       19        2      active sync  /dev/sdb3
-------------------------------------------------------------------------------
       8. RAID system information:
       cat /proc/mdstat
-------------------------------------------------------------------------------
[root@station1 ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md3 : active raid5 sdb1[0] sdb3[2] sdb2[1]
      1975680 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md1 : active raid1 sdb8[1] sdb7[0]
      987840 blocks [2/2] [UU]
md0 : active raid0 sdb6[1] sdb5[0]
      1975680 blocks 64k chunks
unused devices: <none>
-------------------------------------------------------------------------------
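For the arrays to be assembled automatically at boot, their definitions are usually recorded in /etc/mdadm.conf, typically by appending the output of `mdadm --detail --scan`. A sketch of what such a file might look like for the arrays above (the UUIDs are taken from the --detail output shown earlier; the exact ARRAY line format varies by mdadm version):

```
# /etc/mdadm.conf -- lines generated with: mdadm --detail --scan >> /etc/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=271f853e:bf021179:1391832c:c53ea858
ARRAY /dev/md3 level=raid5 num-devices=3 UUID=5cb4d436:48c9041b:da36683d:589cd18d
```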
III. Using RAID devices for LVM logical volumes:
       1. Turn the RAID devices into physical volumes:
       pvcreate /dev/md0
       pvcreate /dev/md1
-------------------------------------------------------------------------------
[root@station1 ~]# pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created
[root@station1 ~]# pvcreate /dev/md1
  Physical volume "/dev/md1" successfully created
-------------------------------------------------------------------------------
       2. Combine the physical volumes into a volume group:
       vgcreate vg0 /dev/md0 /dev/md1
--------------------------------------------------------------------------------
[root@station1 ~]# vgcreate vg0 /dev/md0 /dev/md1
  Volume group "vg0" successfully created
-------------------------------------------------------------------------------
       3. Create a logical volume:
       lvcreate -L 1000M -n test vg0
-------------------------------------------------------------------------------
[root@station1 ~]# lvcreate -L 1000M -n test vg0
  Logical volume "test" created
-------------------------------------------------------------------------------
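Once the logical volume exists, it can be formatted and mounted like any block device. A sketch, assuming a hypothetical mount point /mnt/test:

```
# Format and mount (run as root):
#   mkfs.ext3 /dev/vg0/test
#   mkdir -p /mnt/test
#   mount /dev/vg0/test /mnt/test
# /etc/fstab entry for mounting at boot (/mnt/test is a hypothetical example):
/dev/vg0/test   /mnt/test   ext3   defaults   0 2
```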
