DRBD configuration and common errors


Environment: RedHat 6.5, kernel 2.6.32-431.el6.x86_64 (DRBD is built against a specific kernel; a version mismatch will cause errors)

node1:192.168.1.61

node2:192.168.1.67

Each server has an additional 900 GB HDD attached.

1. First, set the hostnames on both servers

vim /etc/hosts

192.168.1.61  node1
192.168.1.67  node2

vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=node2    (set HOSTNAME=node1 on node1)
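
Editing /etc/sysconfig/network only takes effect after a reboot; to apply the new name immediately as well, you can additionally run (shown for node2; use node1 on the other server):

hostname node2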

2. Partition and format the 900 GB disk

fdisk /dev/sdb (answer n → p → 1 → Enter → Enter → w)

mkfs.ext4 /dev/sdb1

reboot
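
If you prefer to script the partitioning rather than type the fdisk dialog by hand, a minimal sketch that feeds the same keystrokes (assuming /dev/sdb is still blank) looks like this; partprobe re-reads the partition table, which often makes the reboot unnecessary:

printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdb    (same answers: n, p, 1, Enter, Enter, w)
partprobe /dev/sdb
mkfs.ext4 /dev/sdb1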

3. Install the build dependencies, then compile and install DRBD

yum install -y kernel-devel kernel-headers flex

Download the drbd-8.4.3.tar.gz source tarball from the official site, www.drbd.org

tar -zxvf drbd-8.4.3.tar.gz

cd drbd-8.4.3

./configure --prefix=/usr/local/drbd --with-km

make KDIR=/usr/src/kernels/2.6.32-431.el6.x86_64/

make install

cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d/

chkconfig --add drbd

chkconfig drbd on
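
To confirm the init script was registered:

chkconfig --list drbd    (runlevels 2-5 should show "on")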

cd drbd    (the kernel-module subdirectory of the drbd-8.4.3 source tree)

make clean

make KDIR=/usr/src/kernels/2.6.32-431.el6.x86_64/

cp drbd.ko /lib/modules/`uname -r`/kernel/lib/

depmod -a    (rebuild modules.dep so modprobe can find the new module)

modprobe drbd

lsmod | grep drbd    (check that the drbd module loaded successfully)

drbd                  325658  3 
libcrc32c               1246  1 drbd
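
The kernel module and the userland tools must be the same version (8.4.3 here); a quick sanity check:

modinfo drbd | grep -w version
cat /proc/drbd    (its first line also reports the module version once it is loaded)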

4. Configuration files

cd /usr/local/drbd/etc

cat drbd.conf

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example


include "drbd.d/global_common.conf";
include "drbd.d/*.res";

The main config file simply includes the global/common configuration and every file ending in .res under the drbd.d directory.

cd drbd.d

vim global_common.conf

global {
    usage-count yes; # take part in DRBD's usage statistics (the default is yes)
    # minor-count dialog-refresh disable-ip-verification
}

common {
    protocol C; # replication protocol used by DRBD
    handlers {
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when choosing your poison.
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }
    options {
        # cpu-mask on-no-data-accessible
    }
    disk {
        on-io-error detach; # on an I/O error, detach the disk from DRBD
        # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
        # disk-drain md-flushes resync-rate resync-after al-extents
        # c-plan-ahead c-delay-target c-fill-target c-max-rate
        # c-min-rate disk-timeout
    }
    net {
        allow-two-primaries yes;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        # (the four lines above are optional; include them only if you want dual-primary mode)
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
    }
    syncer {
        rate 1024M; # network bandwidth used for resynchronization between the nodes
    }
}

vim drbd.res (create this resource file)

resource r1 {                    # r1 is the name of the resource
    on node1 {                   # "on" followed by the hostname
        device    /dev/drbd0;    # the DRBD device name
        disk      /dev/sdb1;     # drbd0 uses /dev/sdb1 as its backing partition
        address   192.168.1.61:7789; # address and port DRBD listens on
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.67:7789;
        meta-disk internal;
    }
}
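
Before initializing anything, it helps to let drbdadm parse the files on both nodes; dump prints the merged configuration, or a syntax error pointing at the offending line:

drbdadm dump r1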

5. Initialize the resource

drbdadm create-md r1    (run on both nodes)

Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
success

service drbd start    (run on both nodes; the first node to start waits for its peer)

Starting DRBD resources: [
     create res: r1
   prepare disk: r1
    adjust disk: r1
     adjust net: r1
]
.

netstat -anput|grep 7789

tcp        0      0 192.168.1.67:7789           192.168.1.61:46249          ESTABLISHED -                   
tcp        0      0 192.168.1.67:37779          192.168.1.61:7789           ESTABLISHED -  
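
The replication state can be watched from either node; after the initial sync both sides should report UpToDate/UpToDate:

cat /proc/drbd    (cs: is the connection state, ro: the roles, ds: the disk states)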

This completes the DRBD installation and configuration.

6. Some common DRBD errors

Q1: drbdadm create-md r1: exited with code 40?

Running drbdadm create-md r1 prints the following:

open(/dev/sdb1) failed: No such file or directory
 Command 'drbdmeta 0 v08 /dev/sdb1 internal create-md' terminated with exit code 20
 drbdadm create-md r1: exited with code 40

Cause: the partition was never created with fdisk /dev/sdb.

Q2: Running drbdadm create-md r1 fails with

 Failure: (104) Can not open backing device. 

Command 'drbdsetup attach 1 /dev/sdb1 /dev/sdb1 internal' terminated with exit code 10

Cause: the disk probably carries leftover RAID metadata; enter WebBIOS, clear the RAID configuration, and reinstall the system.
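
A commonly suggested, less drastic alternative before reinstalling (treat this sketch as an assumption to verify on your own hardware, and note it destroys anything stored on the partition): zero the start of the backing device to wipe old signatures, then retry:

dd if=/dev/zero of=/dev/sdb1 bs=1M count=128    (destroys any data and old signatures on sdb1)
drbdadm create-md r1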

Q3: Mounting the DRBD device with mount /dev/drbd0 /ceshi fails:

mount: you must specify the filesystem type

Cause: check the role of the current node with drbdadm role r1

Secondary/Secondary

If both sides show Secondary, promote the current node:

drbdadm primary r1
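
If the promotion is refused because the local data is still Inconsistent (fresh metadata that has never synced), the very first promotion has to be forced on exactly one node; in 8.4 the syntax is:

drbdadm primary --force r1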

A secondary node is not allowed to mount or modify the DRBD device.



