Getting Started with Redis (CentOS 7 + Redis 3.2.1)

1. Build and Install

1.1 Download Redis
# cd /tmp/
# wget http://download.redis.io/releases/redis-3.2.1.tar.gz
# tar zxvf redis-3.2.1.tar.gz
# cd redis-3.2.1/


1.2 Build Redis
# make

Error:

A required file cannot be found (the message is truncated in the source; judging from the fix below, it is the well-known missing jemalloc/jemalloc.h build error).

Fix:

make distclean

Then run make again; the Redis source tarball bundles its own copy of jemalloc.
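A minimal recovery sequence, assuming the failure came from a stale or half-built bundled jemalloc. MALLOC=libc is a standard Redis build flag, in case you prefer libc's allocator over the bundled jemalloc:

# cd /tmp/redis-3.2.1
# make distclean        <---- removes all build artifacts, including deps/jemalloc
# make                  <---- rebuild with the bundled jemalloc

Or, alternatively:

# make distclean
# make MALLOC=libc      <---- build against libc's malloc instead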

1.3 Test
# yum install tcl

 

# make test

Error 1:

 

You need tcl 8.5 or newer in order to run the Redis test

Fix:

# yum install tcl.x86_64

Error 2:

[exception]: Executing test client: NOREPLICAS Not enough good slaves to write..
NOREPLICAS Not enough good slaves to write.

......

Killing still running Redis server 63439
Killing still running Redis server 63486
Killing still running Redis server 63519
Killing still running Redis server 63546
Killing still running Redis server 63574
Killing still running Redis server 63591
I/O error reading reply

......

Fix:

vim tests/integration/replication-2.tcl

- after 1000

+ after 10000

Error 3:

[err]: Slave should be able to synchronize with the master in tests/integration/replication-psync.tcl
Replication not started.

Fix:

I hit this once; simply rerunning make test made it pass.

1.4 Install Redis

 

# make install
# cp redis.conf /usr/local/etc/
# cp src/redis-trib.rb /usr/local/bin/


2. Standalone Mode

2.1 Configure Redis

 

# vim /usr/local/etc/redis.conf
daemonize yes
logfile "/var/run/redis/log/redis.log"
pidfile /var/run/redis/pid/redis_6379.pid
dbfilename redis.rdb
dir /var/run/redis/rdb/

 

2.2 Start Redis

 

# mkdir -p /var/run/redis/log
# mkdir -p /var/run/redis/rdb
# mkdir -p /var/run/redis/pid
# /usr/local/bin/redis-server /usr/local/etc/redis.conf
# ps -ef | grep redis
root      71021      1  0 15:46 ?        00:00:00 /usr/local/bin/redis-server 127.0.0.1:6379

 

2.3 Test Redis

 

# /usr/local/bin/redis-cli
127.0.0.1:6379> set country china
OK
127.0.0.1:6379> get country
"china"
127.0.0.1:6379> set country america
OK
127.0.0.1:6379> get country
"america"
127.0.0.1:6379> exists country
(integer) 1
127.0.0.1:6379> del country
(integer) 1
127.0.0.1:6379> exists country
(integer) 0
127.0.0.1:6379> exit

 

2.4 Stop Redis

 

# /usr/local/bin/redis-cli shutdown  

 

3. Master-Slave Mode

3.1 Configure Redis

To test master-slave mode, I start two Redis instances on a single host (given the hardware you could of course use multiple hosts, one instance each). For that, redis.conf must be copied a few times:

 

# cp /usr/local/etc/redis.conf /usr/local/etc/redis_6379.conf
# cp /usr/local/etc/redis.conf /usr/local/etc/redis_6389.conf


Configure instance 6379:

 

# vim /usr/local/etc/redis_6379.conf
daemonize yes
port 6379
logfile "/var/run/redis/log/redis_6379.log"
pidfile /var/run/redis/pid/redis_6379.pid
dbfilename redis_6379.rdb
dir /var/run/redis/rdb/
min-slaves-to-write 1
min-slaves-max-lag 10

 

The last two settings mean: the master accepts writes only while at least 1 slave is connected whose last ping was no more than 10 seconds ago.
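These two values can be checked (and changed) on the running master; a sketch using the standard CONFIG GET command, with replies along these lines:

# /usr/local/bin/redis-cli -p 6379 config get min-slaves-to-write
1) "min-slaves-to-write"
2) "1"
# /usr/local/bin/redis-cli -p 6379 config get min-slaves-max-lag
1) "min-slaves-max-lag"
2) "10"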

 

Configure instance 6389:

 

# vim /usr/local/etc/redis_6389.conf
daemonize yes
port 6389
slaveof 127.0.0.1 6379
logfile "/var/run/redis/log/redis_6389.log"
pidfile /var/run/redis/pid/redis_6389.pid
dbfilename redis_6389.rdb
dir /var/run/redis/rdb/
repl-ping-slave-period 10

 

As you can see, I will start two Redis instances: one on port 6379 (the default) and the other on 6389. The former is the master, the latter its slave.

repl-ping-slave-period is how often (in seconds) the slave sends PING to the master.

 

Additionally, the following settings in 6389's configuration file could be changed (usually there is no need to):

 

slave-read-only yes
slave-serve-stale-data yes

 

The first means the slave is read-only.

The second means that while the slave is synchronizing new data (from the master), it keeps serving clients from its old data set. This keeps the slave non-blocking.
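A quick sanity check of slave-read-only once both instances are running (the error text is what a read-only slave returns; the key name is arbitrary):

# /usr/local/bin/redis-cli -p 6389 set foo bar
(error) READONLY You can't write against a read only slave.
# /usr/local/bin/redis-cli -p 6389 get foo
(nil)                                       <---- reads are still served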

In 6379's configuration file, the following settings could be changed (usually there is no need to):

 

# repl-backlog-size 1mb
repl-diskless-sync no
repl-diskless-sync-delay 5

 

The story behind these settings is as follows:

1. A slave that stays connected is kept consistent with the master through incremental replication.

2. A slave that disconnects and reconnects may catch up through a partial resynchronization (available since Redis 2.8; before that version it had to resync fully, like a brand-new slave). The mechanism:

The master keeps a replication backlog in memory. On reconnect, the slave and the master negotiate over the replication offset and the master run id: if the master run id is unchanged (the master has not restarted) and the offset the slave asks for is still inside the backlog, replication resumes from that offset as a partial resync. If either condition fails, a full resync is required. repl-backlog-size 1mb configures the size of that backlog (the sketch after this list shows how to inspect offsets and backlog at runtime).

3. A brand-new slave, or a reconnected slave that cannot do a partial resync, must do a full resync, i.e. receive an RDB file. The master can deliver that RDB file in two ways:

 

disk-backed: generate the RDB file on disk, then send it to the slave;
diskless: do not create the RDB file on disk at all; write the RDB data straight to the socket as it is generated;

 

repl-diskless-sync selects the strategy. With disk-backed transfer, one RDB file generated on disk can serve several slaves; with diskless transfer, once a transfer has started, newly arriving slaves have to queue (wait for the current transfer to finish). So before starting a diskless transfer, the master may prefer to wait a moment, in the hope that more slaves arrive and it can stream the generated data to all of them in parallel. repl-diskless-sync-delay configures that delay, in seconds.

A master with slow disks may want to consider diskless transfer.
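To see this machinery at work on the master configured above, the replication state can be inspected at runtime, and both diskless options can be changed without a restart. A sketch; the field names are standard INFO replication output, the values are illustrative (the run id itself is shown under INFO server):

# /usr/local/bin/redis-cli -p 6379 info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6389,state=online,offset=1234,lag=0
master_repl_offset:1234
repl_backlog_active:1
repl_backlog_size:1048576
......

# /usr/local/bin/redis-cli -p 6379 config set repl-diskless-sync yes
OK
# /usr/local/bin/redis-cli -p 6379 config set repl-diskless-sync-delay 10
OK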

3.2 Start the Master

 

# /usr/local/bin/redis-server /usr/local/etc/redis_6379.conf
# /usr/local/bin/redis-cli
127.0.0.1:6379> set country Japan
(error) NOREPLICAS Not enough good slaves to write.
127.0.0.1:6379>

 

As expected: the slave is not up yet, so the condition "at least 1 slave pinged the master within the last 10 seconds" cannot be met, and the write fails.

3.3 Start the Slave

 

# /usr/local/bin/redis-server /usr/local/etc/redis_6389.conf
# /usr/local/bin/redis-cli
127.0.0.1:6379> set country Japan
OK

 

3.4 Stop Redis

 

# /usr/local/bin/redis-cli -p 6389 shutdown
# /usr/local/bin/redis-cli -p 6379 shutdown

4. Cluster + Master-Slave Mode

What cluster mode offers: it distributes the data set across the nodes automatically, and it keeps serving when a subset of the nodes fails.

Data distribution: Redis does not use consistent hashing but another form of sharding: there are 16384 hash slots in total, spread over the nodes (for example node A holds 0-4999, node B holds 5000-9999, node C holds 10000-16383). Each key maps to a hash slot. If, say, key_foo maps to slot 1000 and slot 1000 lives on node A, then key_foo (and its value) is stored on node A. You can also force different keys onto the same slot by wrapping part of the key in {}: only the part inside {} is used when mapping the key to a slot. For example, for this{foo}key and another{foo}key only "foo" is hashed, so the two keys are guaranteed to map to the same slot.

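Any node can tell you which slot a key maps to, which makes the {} rule easy to verify once the cluster below is running. A sketch; CLUSTER KEYSLOT is a standard command, and 12182 is the slot of "foo" (it matches the redirection seen for the key foo later in this article):

# /usr/local/bin/redis-cli -p 7000 cluster keyslot this{foo}key
(integer) 12182
# /usr/local/bin/redis-cli -p 7000 cluster keyslot another{foo}key
(integer) 12182                              <---- same slot: only "foo" is hashed
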
TCP ports: every node uses two ports: the client port and the cluster bus port, where cluster bus port = client port + 10000. You only configure the client port; Redis derives the bus port by this rule. The bus port is used for failure detection, configuration updates, fail over authorization, and so on; the client port, besides serving clients, is also used for migrating data between nodes. The client port must be reachable by clients and by all other nodes; the bus port must be reachable by all other nodes.

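On CentOS 7 with firewalld this typically means opening both port ranges for the 7000-7005 instances used below; a sketch to adapt to your zones and network (this is an assumption about your firewall setup, not a step from the original article):

# firewall-cmd --permanent --add-port=7000-7005/tcp       <---- client ports
# firewall-cmd --permanent --add-port=17000-17005/tcp     <---- cluster bus ports
# firewall-cmd --reload
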
Consistency guarantees: a Redis cluster cannot guarantee strong consistency. In other words, under certain conditions Redis may lose writes it has already acknowledged to the client.

Data loss through asynchronous replication: 1. a client writes to master B; 2. master B replies OK to the client; 3. master B propagates the write to its slaves. Because B does not wait for its slaves to acknowledge the write before replying OK, the write is lost if B crashes right after step 2.

Data loss through a network split: say there are three masters A, B, C with slaves A1, B1, C1, and a split puts B on the same side as a client. Within cluster-node-timeout the client can keep writing to B; once cluster-node-timeout is exceeded, the other side of the split fails over and elects B1 master. Everything the client wrote to B in the meantime is lost.

Redis itself (outside cluster mode) can also lose data: 1. RDB snapshots are periodic, so writes since the last snapshot can be lost; 2. with AOF every write is logged, but the log is synced periodically, so writes within a sync interval can still be lost.

In the text below, "node" and "instance" are interchangeable (I am experimenting with the cluster on a single host, so each node is represented by an instance).

I will run the cluster on a single machine, so I need to create six Redis instances: 3 masters and 3 slaves, on ports 7000-7005.

4.1 Configure the Redis cluster:

 

# cp /usr/local/etc/redis.conf /usr/local/etc/redis_7000.conf
# vim /usr/local/etc/redis_7000.conf
daemonize yes
port 7000
pidfile /var/run/redis/pid/redis_7000.pid
logfile "/var/run/redis/log/redis_7000.log"
dbfilename redis_7000.rdb
dir /var/run/redis/rdb/
min-slaves-to-write 0
cluster-enabled yes
cluster-config-file /var/run/redis/nodes/nodes-7000.conf
cluster-node-timeout 5000
cluster-slave-validity-factor 10
repl-ping-slave-period 10

Here I changed min-slaves-to-write to 0 so that the cluster can still serve reads and writes after the fail over demonstrated later (otherwise, once a master crashes and its slave is promoted, the new master has no slave of its own and would refuse writes).

 

 

 

# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7001.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7002.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7003.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7004.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7005.conf
# sed -i -e 's/7000/7001/' /usr/local/etc/redis_7001.conf
# sed -i -e 's/7000/7002/' /usr/local/etc/redis_7002.conf
# sed -i -e 's/7000/7003/' /usr/local/etc/redis_7003.conf
# sed -i -e 's/7000/7004/' /usr/local/etc/redis_7004.conf
# sed -i -e 's/7000/7005/' /usr/local/etc/redis_7005.conf

 

4.2 Start the Redis Instances

 

# mkdir -p /var/run/redis/log
# mkdir -p /var/run/redis/rdb
# mkdir -p /var/run/redis/pid
# mkdir -p /var/run/redis/nodes


 

 

# /usr/local/bin/redis-server /usr/local/etc/redis_7000.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7001.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7002.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7003.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7004.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7005.conf


 

Now each instance's log contains a line like the following (with a different hex string for each instance, of course):

 

3125:M 12 Jul 15:24:16.937 * No cluster configuration found, I'm b6be6eb409d0207e698997d79bab9adaa90348f0

 

That hex string is in fact the instance's ID. In a cluster it uniquely identifies a Redis instance; each instance remembers the others by this ID rather than by IP and port (which can change). The instances we run here are the cluster's nodes, so this ID is the Node ID.
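The IDs can be listed with CLUSTER NODES, which prints one line per known node, starting with the Node ID. A sketch; right after start-up, before the cluster is created, each instance only knows itself, so the output is a single line along these lines:

# /usr/local/bin/redis-cli -p 7000 cluster nodes
b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000 myself,master - 0 0 0 connected

After the cluster is created in the next section, the same command lists all six nodes.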

4.3 Create the Redis Cluster

We use redis-trib.rb, copied from the Redis source tree, to create the cluster. It is a Ruby script; to make it runnable, the following preparation is needed:

 

# yum install gem
# gem install redis


 

Now the cluster can be created:

 

# /usr/local/bin/redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:7000
127.0.0.1:7001
127.0.0.1:7002
Adding replica 127.0.0.1:7003 to 127.0.0.1:7000
Adding replica 127.0.0.1:7004 to 127.0.0.1:7001
Adding replica 127.0.0.1:7005 to 127.0.0.1:7002
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
M: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) master
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) master
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) master
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

The last line:

[OK] All 16384 slots covered.

means that all 16384 slots are served by some master, so the cluster can be considered successfully created. From the command's output we can read:

Instance 7000 ID: b6be6eb409d0207e698997d79bab9adaa90348f0
Instance 7001 ID: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
Instance 7002 ID: 6b92f63f64d9683e2090a28ebe9eac60d05dc756
Instance 7003 ID: ebfa6b5ab54e1794df5786694fcabca6f9a37f42
Instance 7004 ID: 026e747386106ad2f68e1c89543b506d5d96c79e
Instance 7005 ID: 441896afc76d8bc06eba1800117fa97d59453564

Of these:

7000, 7001 and 7002 are masters;

7000 holds slots 0-5460, its slave is 7003;
7001 holds slots 5461-10922, its slave is 7004;
7002 holds slots 10923-16383, its slave is 7005.

If you do not want slaves and all 6 instances should be masters (no redundancy), simply drop "--replicas 1".

Before moving on, let's look at what these configuration options mean:

cluster-enabled: enables cluster mode.

cluster-config-file: the file in which an instance persists the cluster configuration (the other instances, their state, and so on). It is not meant to be edited by hand; the instance rewrites it whenever it receives messages (for example about another instance changing state).

cluster-node-timeout: a node that is unreachable for longer than this value (in milliseconds) is considered failing. This matters in two ways: 1. a master unreachable for longer than this may be failed over to one of its slaves; 2. a node that cannot reach a majority of masters for longer than this stops accepting requests (for example, a master isolated by a network split stops serving once the timeout passes, because the other side of the split may already have failed over to its slave).

cluster-slave-validity-factor: this one takes some explaining. A slave that considers its data too old will not attempt a fail over. How does it judge the age of its data? There are two mechanisms. Mechanism 1: when several slaves are eligible to fail over, they exchange information and derive a rank from their replication offsets (which reflect how much data each has received from the master), then delay the fail over according to rank. (Yuanguo: presumably like this: the replication offset determines which slave has newer data (received more from the master) and which has older, yielding a rank; slaves with newer data fail over sooner, slaves with older data later, the older the longer the delay.) Mechanism 1 has nothing to do with this configuration option. Mechanism 2: every slave records the time elapsed since its last interaction with the master (a PING or a command). If that time is too large, the data is considered old. What counts as "too large"? That is exactly what this option controls: the time is too large if it exceeds

(node-timeout * slave-validity-factor) + repl-ping-slave-period

and in that case the data is considered old.

repl-ping-slave-period: how often (in seconds) the slave sends PING to the master.

The following two options we did not set:

cluster-migration-barrier: suppose one master has 3 slaves while another master has none; a slave should then migrate from the first master to the second. But when a master gives away a slave, it must retain a minimum number of slaves itself: that minimum is cluster-migration-barrier. For example, set to 3, no migration happens in the scenario above, because afterwards the retained count would be below 3. So to forbid slave migration altogether, just set this value very high.

cluster-require-full-coverage: if yes, the whole cluster stops accepting requests as soon as any hash slot is uncovered; a partial outage that leaves some slots uncovered then makes the entire cluster unavailable. If you want the still-covered slots to keep serving while some nodes are down, set it to no.

4.4 Test the Cluster

4.4.1 redis-cli in cluster mode

 

# /usr/local/bin/redis-cli -p 7000
127.0.0.1:7000> set country China
(error) MOVED 12695 127.0.0.1:7002
127.0.0.1:7000> get country
(error) MOVED 12695 127.0.0.1:7002
127.0.0.1:7000> exit
# /usr/local/bin/redis-cli -p 7002
127.0.0.1:7002> set country China
OK
127.0.0.1:7002> get country
"China"
127.0.0.1:7002> set testKey testValue
(error) MOVED 5203 127.0.0.1:7000
127.0.0.1:7002> exit
# /usr/local/bin/redis-cli -p 7000
127.0.0.1:7000> set testKey testValue
OK
127.0.0.1:7000> exit

 

So can a given key only be served by one particular master?

It is not. redis-cli needs the -c flag to run in cluster mode. In cluster mode, data can be read and written on any node (master or slave):

 

# /usr/local/bin/redis-cli -c -p 7002
127.0.0.1:7002> set country America
OK
127.0.0.1:7002> set testKey testValue
-> Redirected to slot [5203] located at 127.0.0.1:7000
OK
127.0.0.1:7000> exit
# /usr/local/bin/redis-cli -c -p 7005
127.0.0.1:7005> get country
-> Redirected to slot [12695] located at 127.0.0.1:7002
"America"
127.0.0.1:7002> get testKey
-> Redirected to slot [5203] located at 127.0.0.1:7000
"testValue"
127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7002
OK
127.0.0.1:7002> exit

In fact, redis-cli's cluster support is fairly basic: it merely follows the redirections that nodes issue based on slot ownership. In the example above, when setting testKey on instance 7002, the computed slot is 5203, which is owned by instance 7000, so the client is redirected there.

 

A better client caches the hash slot to node address map and talks to the right node directly, avoiding redirections. The map only needs refreshing when the cluster configuration changes, e.g. after a fail over (a slave has replaced its master, so the node address changed) or after an administrator added/removed nodes (the slot distribution changed).
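CLUSTER SLOTS is the command such clients typically use to build that map; a sketch against this cluster (the reply layout is standard for Redis 3.2, values abbreviated):

# /usr/local/bin/redis-cli -p 7000 cluster slots
1) 1) (integer) 0                  <---- first slot of the range
   2) (integer) 5460               <---- last slot of the range
   3) 1) "127.0.0.1"               <---- master serving the range
      2) (integer) 7000
   4) 1) "127.0.0.1"               <---- its slave
      2) (integer) 7003
......                             <---- one entry per slot range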

4.4.2 The Ruby client: redis-rb-cluster

Installation

 

# cd /usr/local/
# wget https://github.com/antirez/redis-rb-cluster/archive/master.zip
# unzip master.zip
# cd redis-rb-cluster-master

 

Testing

After installing, enter the redis-rb-cluster-master directory; it contains an example.rb. Run it:

 

# ruby example.rb 127.0.0.1 7000
1
2
3
4
5
6
^C


 

It loops forever, setting key-value pairs like these in the cluster:

foo1 => 1

foo2 => 2

......

The result can be verified with redis-cli.
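For instance (a sketch; with -c the client follows whatever redirection the key's slot requires):

# /usr/local/bin/redis-cli -c -p 7000 get foo1
"1"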

4.4.3 Resharding hash slots between nodes

To show that IO continues throughout a resharding, open another terminal and run

 

# ruby example.rb 127.0.0.1 7000;

 

and meanwhile, in the original terminal, test the resharding:

 

# /usr/local/bin/redis-trib.rb reshard 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2000                    <---- how many hash slots to migrate?
What is the receiving node ID? 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9      <---- destination? instance 7001
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all                                                           <---- source? all
Do you want to proceed with the proposed reshard plan (yes/no)? yes          <---- confirm

 

IO in the other terminal continued without interruption during the migration. Once it finishes, check the new slot distribution:

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

The slot distribution has indeed changed:

7000: 4461 slots (1000-5460)
7001: 7462 slots (0-999, 5461-11922)
7002: 4461 slots (11923-16383)

4.4.4 fail over

Before testing fail over, let's look at another tool shipped with redis-rb-cluster: consistency-test.rb. It is a consistency checker: it increments counters and verifies that their values are what they should be.

 

# ruby consistency-test.rb 127.0.0.1 7000
198 R (0 err) | 198 W (0 err) |
685 R (0 err) | 685 W (0 err) |
1174 R (0 err) | 1174 W (0 err) |
1675 R (0 err) | 1675 W (0 err) |
2514 R (0 err) | 2514 W (0 err) |
3506 R (0 err) | 3506 W (0 err) |
4501 R (0 err) | 4501 W (0 err) |

 

The two (N err) fields in parentheses count IO errors, not inconsistencies. Inconsistencies are printed in the last column (there are none in the output above). To demonstrate one, I modify the consistency-test.rb script to print the key of each counter, and then change a key's value from another terminal via redis-cli.

 

# vim consistency-test.rb
            # Report
            sleep @delay
            if Time.now.to_i != last_report
                report = "#{@reads} R (#{@failed_reads} err) | " +
                         "#{@writes} W (#{@failed_writes} err) | "
                report += "#{@lost_writes} lost | " if @lost_writes > 0
                report += "#{@not_ack_writes} noack | " if @not_ack_writes > 0
                last_report = Time.now.to_i
+               puts key
                puts report
            end

 

Running the script, we can now see the key of each counter it operates on:

 

# ruby consistency-test.rb 127.0.0.1 7000
81728|502047|15681480|key_8715
568 R (0 err) | 568 W (0 err) |
81728|502047|15681480|key_3373
1882 R (0 err) | 1882 W (0 err) |
81728|502047|15681480|key_89
3441 R (0 err) | 3441 W (0 err) |

 

In the other terminal, overwrite the value of key 81728|502047|15681480|key_8715:

 

127.0.0.1:7001> set 81728|502047|15681480|key_8715 0
-> Redirected to slot [12146] located at 127.0.0.1:7002
OK
127.0.0.1:7002>

 

Then consistency-test.rb detects the inconsistency:

 

# ruby consistency-test.rb 127.0.0.1 7000
81728|502047|15681480|key_8715
568 R (0 err) | 568 W (0 err) |
81728|502047|15681480|key_3373
1882 R (0 err) | 1882 W (0 err) |
81728|502047|15681480|key_89
......
81728|502047|15681480|key_2841
7884 R (0 err) | 7884 W (0 err) |
81728|502047|15681480|key_3088
8869 R (0 err) | 8869 W (0 err) | 2 lost |
81728|502047|15681480|key_6771
9856 R (0 err) | 9856 W (0 err) | 2 lost |

The counter's value should have been 2, but those writes are reported lost (I overwrote the value).

4.4.4.1 Automatic fail over

When a master crashes, after a while (the 5 seconds configured above via cluster-node-timeout) it is automatically failed over to its slave.

Run the consistency checker consistency-test.rb in one terminal (with the key-printing line removed again). Then, in another terminal, simulate a master crash:

 

# ./redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001        <---- 7001 is a master
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# /usr/local/bin/redis-cli -p 7001 debug segfault                 <---- simulate a crash of 7001
Error: Server closed the connection
# ./redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004        <---- 7001 failed over to 7004; 7004 is now a master, and has no slave
   slots:0-999,5461-11922 (7462 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

On the consistency-checker side, some IO errors showed up during the fail over:

 

 

7379 R (0 err) | 7379 W (0 err) |
8499 R (0 err) | 8499 W (0 err) |
9586 R (0 err) | 9586 W (0 err) |
10736 R (0 err) | 10736 W (0 err) |
12416 R (0 err) | 12416 W (0 err) |
Reading: Too many Cluster redirections? (last error: MOVED 11451 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 11451 127.0.0.1:7001)
13426 R (1 err) | 13426 W (1 err) |
Reading: Too many Cluster redirections? (last error: MOVED 5549 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 5549 127.0.0.1:7001)
13426 R (2 err) | 13426 W (2 err) |
Reading: Too many Cluster redirections? (last error: MOVED 9678 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 9678 127.0.0.1:7001)
13427 R (3 err) | 13427 W (3 err) |
Reading: Too many Cluster redirections? (last error: MOVED 10649 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 10649 127.0.0.1:7001)
13427 R (4 err) | 13427 W (4 err) |
Reading: Too many Cluster redirections? (last error: MOVED 9313 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 9313 127.0.0.1:7001)
13427 R (5 err) | 13427 W (5 err) |
Reading: Too many Cluster redirections? (last error: MOVED 8268 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 8268 127.0.0.1:7001)
13428 R (6 err) | 13428 W (6 err) |
Reading: CLUSTERDOWN The cluster is down
Writing: CLUSTERDOWN The cluster is down
13432 R (661 err) | 13432 W (661 err) |
14786 R (661 err) | 14786 W (661 err) |
15987 R (661 err) | 15987 W (661 err) |
17217 R (661 err) | 17217 W (661 err) |
18320 R (661 err) | 18320 W (661 err) |
18737 R (661 err) | 18737 W (661 err) |
18882 R (661 err) | 18882 W (661 err) |
19284 R (661 err) | 19284 W (661 err) |
20121 R (661 err) | 20121 W (661 err) |
21433 R (661 err) | 21433 W (661 err) |
22998 R (661 err) | 22998 W (661 err) |
24805 R (661 err) | 24805 W (661 err) |

 

Two things to note:

 

Once the fail over completed, the IO error count stopped growing and the cluster served normally again.

No inconsistency was reported. A master crash can lose data (the slave lags behind the master; when the master crashes and the slave takes over, the lagging data set wins), but it is not very likely, because the master propagates each completed write to its slaves almost at the same moment it replies to the client. It is not impossible, though.

 

Restart 7001; it becomes a slave of 7004:

 

# /usr/local/bin/redis-server /usr/local/etc/redis_7001.conf
# ./redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001             <---- 7001 has become a slave of 7004
   slots: (0 slots) slave
   replicates 026e747386106ad2f68e1c89543b506d5d96c79e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

4.4.4.2 Manual fail over

Sometimes you may want to fail over deliberately: for example, to upgrade a master it is best to turn it into a slave first, which reduces the impact on cluster availability. That is what a manual fail over is for.

A manual fail over must be executed on a slave:

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001                <---- 7001 is a slave of 7004
   slots: (0 slots) slave
   replicates 026e747386106ad2f68e1c89543b506d5d96c79e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# /usr/local/bin/redis-cli -p 7001 CLUSTER FAILOVER                       <---- run the fail over on slave 7001
OK
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004               <---- 7004 becomes a slave of 7001
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001               <---- 7001 becomes a master
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

A manual fail over proceeds as follows:

 

1. The slave tells the master to stop processing client requests.
2. The master stops processing client requests and replies to the slave with its replication offset.
3. The slave waits until its own replication offset matches the master's, i.e. until all pending data has been received. At this point master and slave hold the same data, and the master accepts no new writes.
4. The slave starts the fail over: it obtains a configuration epoch from the majority of masters (a version number for configuration changes, presumably versioning the information kept in cluster-config-file) and broadcasts the new configuration, in which it is the master.
5. The old master receives the new configuration and resumes processing client requests: it redirects them to the new master, having itself become a slave.

 

The fail over command takes two options:

 

FORCE: the procedure above needs the master's cooperation. If the master is unreachable (a network fault, or a crashed master for which automatic fail over did not complete), adding FORCE makes the fail over skip the handshake with the master and start directly at step 4.

TAKEOVER: the procedure above needs the agreement of a majority of masters, which also hand out the new configuration epoch. Sometimes we want to fail over immediately without that consensus; TAKEOVER does exactly that. A real use case: all masters are in one data center and all slaves in another; when the masters go down or the network splits, promoting all slaves in the other data center at once switches the whole service to that data center.
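Both variants are issued on a slave, like the plain command; a sketch (use with care, for the reasons above):

# /usr/local/bin/redis-cli -p 7004 CLUSTER FAILOVER FORCE       <---- skip the handshake with an unreachable master
# /usr/local/bin/redis-cli -p 7004 CLUSTER FAILOVER TAKEOVER    <---- additionally skip the majority agreement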

 

4.4.5 Add Nodes

4.4.5.1 Add a master

 

# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7006.conf
# sed -i -e 's/7000/7006/' /usr/local/etc/redis_7006.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7006.conf             <---- 1. copy and edit the conf, start the instance
# /usr/local/bin/redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000    <---- 2. add the instance to the cluster
>>> Adding node 127.0.0.1:7006 to cluster 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:7006 to make it join the cluster.
[OK] New node added correctly.                                           <---- the new node was added successfully
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000                      <---- 3. check
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006               <---- the new node owns no slots, so a manual reshard is needed
   slots: (0 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# /usr/local/bin/redis-trib.rb reshard 127.0.0.1:7000                    <---- 4. reshard
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots: (0 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 3000                  <---- move 3000 slots
What is the receiving node ID? 6147326f5c592aff26f822881b552888a23711c6     <---- destination: the newly added node
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.                    <---- source: 7001; the earlier reshard left it with the most slots, so move 3000 away from it
Source node #1:23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
Source node #2:done
......
    Moving slot 7456 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7457 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7458 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7459 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7460 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
Do you want to proceed with the proposed reshard plan (yes/no)? yes         <---- confirm
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000                         <---- 5. check again
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006                   <---- the new node now owns 3000 slots
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

4.4.5.2 Add a slave

 

1. Copy and edit the conf, start an instance

# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7007.conf
# sed -i -e 's/7000/7007/' /usr/local/etc/redis_7007.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7007.conf

2. Add it as a slave, specifying its master

[root@localhost ~]# /usr/local/bin/redis-trib.rb add-node --slave --master-id 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7007 127.0.0.1:7000
>>> Adding node 127.0.0.1:7007 to cluster 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:7007 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 127.0.0.1:7006.
[OK] New node added correctly.

3. Check

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: 6d8675118da6b492c28844395ee6915506c73b3a 127.0.0.1:7007               <---- the new node was added and became a slave of 7006
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 


Above, the slave was added with an explicit master. You can also omit the master: the node then becomes the slave of a randomly picked master, and can later be moved to the intended master with the CLUSTER REPLICATE command. Yet another option is to add it as an empty master and then turn it into a slave with CLUSTER REPLICATE.
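For instance, to re-attach 7007 to a different master by Node ID (a sketch; the command runs on the node that is to become the slave):

# /usr/local/bin/redis-cli -p 7007 CLUSTER REPLICATE 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9    <---- make 7007 a slave of 7001
OK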

 

4.4.6 Delete Nodes

Before deleting anything, look at the current layout:

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: 6d8675118da6b492c28844395ee6915506c73b3a 127.0.0.1:7007
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

So the layout is:

master  slave
7000    7003
7001    7004
7002    7005
7006    7007

We will delete 7007 (a slave) and 7002 (a master).

4.4.6.1 Delete a slave node

Deleting a slave (7007) is easy, with del-node:

 

# /usr/local/bin/redis-trib.rb del-node 127.0.0.1:7000 6d8675118da6b492c28844395ee6915506c73b3a
>>> Removing node 6d8675118da6b492c28844395ee6915506c73b3a from cluster 127.0.0.1:7000
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006       <---- 7006 has no slave now
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

4.4.6.2 Delete a master node

Before a master can be deleted, it must be empty (serving no slots); that is done with reshard. Only then can the master be removed.

 

# /usr/local/bin/redis-trib.rb reshard 127.0.0.1:7000                        <---- reshard
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4461                   <---- we plan to empty 7002, so move all of its 4461 slots
What is the receiving node ID? 6147326f5c592aff26f822881b552888a23711c6      <---- destination: 7006
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:6b92f63f64d9683e2090a28ebe9eac60d05dc756                      <---- source: 7002
Source node #2:done
......
    Moving slot 16382 from 6b92f63f64d9683e2090a28ebe9eac60d05dc756
    Moving slot 16383 from 6b92f63f64d9683e2090a28ebe9eac60d05dc756
Do you want to proceed with the proposed reshard plan (yes/no)? yes

 

Check that 7002 has been emptied:

 

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005           <---- 7002's slave now belongs to 7006 (what would happen here if cluster-migration-barrier were set?)
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002           <---- 7002 is now empty
   slots: (0 slots) master
   0 additional replica(s)                                           <---- and its slave is gone (with no data, a slave would be a waste) !!!!!!
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Now 7002 can be deleted:

 

 

# /usr/local/bin/redis-trib.rb del-node 127.0.0.1:7000 6b92f63f64d9683e2090a28ebe9eac60d05dc756
>>> Removing node 6b92f63f64d9683e2090a28ebe9eac60d05dc756 from cluster 127.0.0.1:7000
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

The layout now:

 

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

So:

master  slave
7000    7003
7001    7004
7006    7005

4.4.7 Slave Migration

The current topology is:

 

 

master  slave
7000    7003
7001    7004
7006    7005

We can assign a slave to a different master with a single command:

 

# /usr/local/bin/redis-cli -p 7003 CLUSTER REPLICATE 6147326f5c592aff26f822881b552888a23711c6    <---- make 7003 a slave of 7006
OK
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000           <---- 7000 has no slave
   slots:1000-5460 (4461 slots) master
   0 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005           <---- 7005 is a slave of 7006
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003           <---- 7003 is a slave of 7006
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006           <---- 7006 has two slaves
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   2 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Besides manual migration, Redis also migrates slaves automatically; this was briefly mentioned with the cluster-migration-barrier option earlier:

 

At certain moments Redis tries to move a slave away from the master with the most slaves to a master that has none. Thanks to this mechanism you can simply add some spare slaves to the system without choosing their masters; when a master loses all its slaves (they fail one after another), the system migrates one to it automatically.

cluster-migration-barrier: the minimum number of slaves a master must keep when donating one during automatic migration. For example, with the value 2: if I have 3 slaves and you have none, I give you one; if I have only 2, I don't.
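The barrier can be inspected and changed at runtime like other options (a sketch; 1 is the Redis default):

# /usr/local/bin/redis-cli -p 7000 config get cluster-migration-barrier
1) "cluster-migration-barrier"
2) "1"
# /usr/local/bin/redis-cli -p 7000 config set cluster-migration-barrier 2
OK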

 

4.4.8 Upgrade Nodes

4.4.8.1 Upgrade a slave

 

Stop it; start it again with the new Redis version.

 

4.4.8.2 Upgrade a master

 

Manually fail over to one of its slaves; wait until the master has become a slave; then upgrade it like a slave (stop it, start it with the new Redis binary); optionally fail over back again (see the sketch below).
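Put together, an upgrade of master 7001 might look like this (a sketch; /opt/redis-new/src/redis-server is a hypothetical location for the new binary, not a path used elsewhere in this article):

# /usr/local/bin/redis-cli -p 7004 CLUSTER FAILOVER                  <---- on its slave: 7001 becomes a slave
# /usr/local/bin/redis-cli -p 7001 shutdown                          <---- stop the old binary
# /opt/redis-new/src/redis-server /usr/local/etc/redis_7001.conf     <---- start the new binary (hypothetical path)
# /usr/local/bin/redis-cli -p 7001 CLUSTER FAILOVER                  <---- optional: make 7001 a master again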

 

4.4.9 Migrating a Cluster

Not needed for now.

4.4.10 Stop/Start the Cluster

Stopping the cluster: just stop the instances one by one.

 

# /usr/local/bin/redis-cli -p 7000 shutdown
# /usr/local/bin/redis-cli -p 7001 shutdown
# /usr/local/bin/redis-cli -p 7003 shutdown
# /usr/local/bin/redis-cli -p 7004 shutdown
# /usr/local/bin/redis-cli -p 7005 shutdown
# /usr/local/bin/redis-cli -p 7006 shutdown
# ps -ef | grep redis
root      26266  23339  0 17:24 pts/2    00:00:00 grep --color=auto redis
[root@localhost ~]#

 

Starting the cluster: just start the instances one by one (there is no need to run redis-trib.rb create again):

 

# /usr/local/bin/redis-server /usr/local/etc/redis_7000.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7001.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7003.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7004.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7005.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7006.conf
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
M: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
S: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots: (0 slots) slave
   replicates ebfa6b5ab54e1794df5786694fcabca6f9a37f42
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Test that IO works:

 

# ruby consistency-test.rb 127.0.0.1 7000
109 R (0 err) | 109 W (0 err) |
661 R (0 err) | 661 W (0 err) |
1420 R (0 err) | 1420 W (0 err) |
2321 R (0 err) | 2321 W (0 err) |
……

 

5. Summary

This article walked through setting up Redis (standalone mode, master-slave mode, and cluster mode), trying along the way to explain how the system works, even if not exhaustively. I hope it can serve as introductory material.
