Redis Study Notes


These notes summarize my own understanding, drawn from how we use Redis at work.



Distributed systems: in essence, distributed applications with multiple databases that need to perform cross-database operations.

Why distributed systems arise:
First, application systems in an internet environment have complex business requirements, so they must be split vertically at the system level to keep each business domain clear, with each part deployed and serving traffic on its own.
Second, the user base is broad, so high concurrency is inevitable, and that puts enormous pressure on any single node (physical machine). From this angle, we need to deploy different application systems onto different nodes, and likewise spread different data across different physical nodes.
Third, in the internet era data volumes are large, which is why data has to be processed in a "distributed" fashion. (Distribution in turn raises a whole series of problems, such as data consistency, security, scalability, server high availability, and load capacity.)
Fourth, even though our MySQL cluster already does read/write splitting and vertical/horizontal sharding, the read concurrency and data volume are both huge; querying the database every time is too slow, while querying a memory-level cache is very fast. Distributed caches such as Redis arose from exactly this. And because no server may become a single point of failure (if the cache server goes down and every query falls back to MySQL, it is painfully slow), Redis provides distributed deployment; Redis 3.0 and later offer distributed cluster deployment. For example, an order summary must be fetched extremely fast, so that kind of information belongs in the cache, and the cached data must be kept consistent with the data in the DB.

The most mainstream, core key-value NoSQL databases in the internet industry today:
1. Redis, Memcached, and SSDB are probably the three cache databases used most in our industry, and they are generally recognized as the mainstream distributed cache storage options.
2. All three are k-v stores, each with its own pros and cons; weighing them up, I personally think Redis sees somewhat wider use than the other two.
3. Redis's features stand out: it supports multiple data types such as String, Hash, Set, List, and zset, has high-availability solutions and a cluster solution (3.0 onwards), and supports horizontal scaling. In other words it meets most companies' needs, while the Memcached and SSDB solutions are comparatively less complete.
4. Comparing Redis with Memcached: Redis operates on a single thread, while Memcached is multi-threaded. At any moment, Memcached's client api supports concurrent access to the contents of the memcached server; Redis being single-threaded, its client api does not support concurrent access to the contents of a single Redis server. But Redis can run multiple instances, each accepting its own external requests, which amounts to supporting concurrency, and it stays safe because the accesses hit different instances. Memcached supports concurrent access but needs a locking mechanism to control it, so even with concurrency its performance still does not beat single-threaded Redis.
5. Strictly speaking, Redis's built-in MULTI/EXEC does not give real transactions: a failed command does not roll the batch back. (Memcached does not offer batch transactions either; it only has per-key CAS.) The Redis transaction solution is to use a Lua script: when writing a batch, put all the write operations into a Lua script and have Lua apply them to redis as one unit. The script executes atomically with respect to other clients, so the batch takes effect as a whole, which is what passes for a transaction in Redis.
6. Comparing Redis with SSDB: SSDB is architected on Google's very fast LevelDB as its storage engine, and it also integrates well with Redis, so the two are usually combined, because while SSDB's performance is impressive, its high availability is currently not as good as Redis's. Redis's weakness relative to SSDB is persistence: writing the rdb or aof file is slow. With small data volumes Redis's write performance is fine, but with very large write volumes, Redis being a memory-level cache system, heavy writes make writing the persistence file expensive and performance drops. SSDB's writes are very efficient, so Redis and SSDB are often combined: Redis supports high-concurrency reads, SSDB supports high-concurrency writes, and that way both read and write throughput are genuinely high.

Pros and cons of Redis
Redis stores data as key-value pairs and differs from a traditional relational database; it does not necessarily follow a traditional database's basic requirements (it is non-relational, distributed, open source, and horizontally scalable).
Pros:
1. High-concurrency reads and writes of data
2. Efficient storage of and access to massive data
3. Data scalability and high availability
Cons:
1. redis has no rollback-style transactions (MULTI/EXEC does not roll back; if you want transaction-like behavior from redis, combine it with Lua)
2. It cannot express very complex relational database models.
3. Redis is a key-value store and a data structure server. Values can be strings, hashes, lists, sets, and sorted sets (zset). These collections all support push/pop, add/remove, intersections and unions, and richer operations besides, and redis supports sorting in several different ways, all implemented with efficiency in mind.
4. Redis data is all cached in memory, so once the server shuts down or restarts the data would be gone; for that reason it can also periodically write the updated data to disk, or append each modification to a log file. redis supports two persistence formats: rdb (every n seconds the in-memory data is synced to a .rdb file, also called a snapshot) and aof (every redis write is logged to the .aof file, i.e. a log). In real projects the aof persistence format is usually chosen to keep data safe, but it performs worse, since every redis write must also write the aof log; this is also why Redis gets paired with SSDB to solve the write-performance problem.


The industry currently runs Redis in production in three ways
Option 1: master/slave replication

Master/Slave replication features
1. A master can have multiple slaves.
2. Besides connecting to the same master, slaves can also connect to other slaves (chained replication).
3. Replication does not block the master's data syncing; the master can keep processing client requests at the same time.
4. The master serves reads and writes; slaves are read-only. The drawback: once the master goes down, writes are no longer possible.

Master/Slave replication process
1. The slave connects to the master and sends the sync command.
2. The master starts a background process that saves a snapshot of the redis cache database to a file, while at the same time the master's main process collects and buffers the new write commands.
3. Once the background save completes, the master sends the file to the slave.
4. The slave saves this file on the slave server (and loads it into memory).

How to configure Master/Slave
step 1. On the slave node, open the redis.conf configuration file.
step 2. Add one line to redis.conf: slaveof <Master IP> <Master Port>
step 3. Use the info command to check the state of the redis cache server (a minimal sketch follows).
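For example, a minimal sketch; the master's IP and port are placeholders for your own:

    # in the slave's redis.conf
    slaveof 192.168.1.171 6379

    # then verify from redis-cli on the slave:
    ./redis-cli
    127.0.0.1:6379> info replication
    # role:slave, master_host:192.168.1.171 and master_link_status:up indicate a healthy link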

 
Option 2: high availability with Sentinel and Keepalived

The HA sentinel architecture adds, on top of master/slave replication, a sentinel Redis server that monitors the heartbeat of all the Redis servers (checking every n seconds whether each node machine is alive). Once the master goes down, a slave can be elected to carry on as master. Sentinel mode is an architecture built into Redis.

The sentinel itself is a single point, though: if the sentinel goes down, HA can no longer be guaranteed. That is why production environments often deploy a sentinel on every slave, which removes the single point. Even so, this is still not as good as the Redis 3.0 cluster solution (a minimal sentinel configuration sketch follows).
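A minimal sentinel configuration sketch; the master name, IP, port, and thresholds are placeholders for your own, and the quorum of 2 means two sentinels must agree before the master is declared down:

    # sentinel.conf, started with: ./redis-sentinel sentinel.conf
    port 26379
    sentinel monitor mymaster 192.168.1.171 6379 2
    sentinel down-after-milliseconds mymaster 30000
    sentinel failover-timeout mymaster 180000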
      
Keepalived's job is to monitor the state of the web servers. If one web server dies or starts misbehaving, Keepalived detects it and removes the faulty server from the pool; when the server is healthy again, Keepalived automatically adds it back. All of this is automatic, with no manual intervention; the only manual work left is repairing the failed server. Still, at the instant of a failure Keepalived shows a brief interruption and some delay, which is not good enough. Hence the cluster architecture.

Option 3: the Redis 3.0 cluster
 
        Since Redis 3.0, Redis has a cluster mode. Redis Cluster has the concept of 16384 slots, numbered 0, 1, 2, 3 ... 16382, 16383. The slots are virtual; they do not physically exist. In normal operation, every master node in the Redis Cluster is responsible for a share of the slots; when a key maps to a slot some master is responsible for, that master serves the key. Which master is responsible for which slots can be specified by the user, or generated automatically at initialization (the redis-trib.rb script). Worth noting: in Redis Cluster only masters own slots; a master's slave merely uses the slots and has no ownership of them.

       The officially recommended Redis cluster layout has several master nodes, each with one or more slave nodes hanging off it. For example: the cluster holds 90 pieces of data mapped into 90 slots; through Redis's hash-slot assignment, Master1 manages 30 slots, Master2 manages 30, and Master3 manages 30. If a node is added, the Redis hash slots are not recomputed: whatever slot a piece of data is in, it stays in that slot; only which Redis master node manages which slots changes. For example, after adding one node, Master1 goes from managing 30 slots to 20, Master2 from 30 to 20, Master3 from 30 to 20, and the new Master4 manages 30. The data in the slots the old nodes give up is then transferred to the new node. There is an intermediate state during the transfer: until a slot's data has fully moved, the redis client is still served by the old node, and only once the transfer is complete does it go to the new node.
      Removing a node works the same way: the Redis hash slots are not recomputed; data stays in whatever slot it is in, and only which Redis master node manages which slots changes.
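A key is mapped to a slot by hashing it (CRC16 of the key, modulo 16384), and you can ask the cluster which slot a key lands in; the IP and port below are placeholders, and the slot number is simply whatever the key hashes to:

    ./redis-cli -c -h 192.168.1.171 -p 7071
    192.168.1.171:7071> cluster keyslot name
    (integer) 5798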

      Before Redis 3.0 people always used sentinels; from 3.0 on, everyone uses the cluster. A cluster needs at least 6 nodes: 3 masters and 3 slaves.

How to config redis cluster
(1) port **: on every node in the redis cluster, configure the port in redis.conf.
(2) bind ip: on every node in the redis cluster, bind the current machine's ip in redis.conf.
(3) dir /usr/local/redis-cluster/: on every node in the redis cluster, specify where the data files are stored.
(4) set cluster-enabled to yes (a minimal per-node redis.conf sketch follows this list)
(5) Install Ruby; the Redis cluster tooling is written in Ruby, so Ruby must be installed on Linux:
     <1> yum install ruby
     <2> yum install rubygems
     <3> gem install redis
(6) Start each redis server (at this point the 6 redis machines have no relationship with each other; we still need to run the cluster command)
(7) Go to the redis 3.0 install directory and find redis-trib.rb
(8) cd /usr/local/redis3.0/src
(9) ./redis-trib.rb create --replicas 1 ip:port ip:port ip:port ip:port ip:port ip:port      (list every node's ip and port; the 1 here is a ratio, the ratio of slaves per master, so the master-to-slave ratio is the same for every master in the cluster: you cannot have one master with 2 slaves and another master with 4.)
(10) Connect a client to one of the nodes: ./redis-cli -c -h <ip> -p <port> (-c means cluster mode, -h the ip, -p the port). E.g.: ./redis-cli -c -h 192.168.1.171 -p 7071
(11) cluster info shows the cluster information; cluster nodes shows the cluster's node information
(12) To shut the cluster down, shut the nodes down one by one
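A minimal per-node redis.conf sketch for steps (1)-(4); the port, IP, and paths are placeholders, and each of the 6 nodes gets its own copy with its own port:

    port 7071
    bind 192.168.1.171
    dir /usr/local/redis-cluster/7071/
    cluster-enabled yes
    cluster-config-file nodes-7071.conf
    cluster-node-timeout 15000
    daemonize yes
    appendonly yes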

How to Write/Read data from Redis Cluster

If we open a client on Master1 and write one piece of data, the data will not necessarily be written to Master1: the key goes through the Redis hash-slot mapping, which picks one node in the redis cluster, and it is stored there. You can think of the hash slots as a kind of load balancing: they reduce the pressure on the Redis cluster and keep one machine from being extremely busy while the others sit idle.

For example:
Connect a client to Redis Master1:
set name z3 (the data is automatically stored on Master17)
keys name (queries the current Master1; the key name is not there)
get name (the Redis hash-slot mapping forwards this request to the owning Master17, fetches name, and returns it)
Switch to a client on Redis Master17:
keys name (queries the data on Master17; the key name is there)

Querying on a slave works the same way, except that you must first issue the readonly command; when using the Java API client this is handled for you and it just works. Don't read too much into it; a later Redis version may well change this (a sketch of both behaviors follows).
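A sketch of what the cluster-mode client prints when a key hashes to a slot on another node, and of reading from a slave; the addresses and slot number are placeholders, and the slave read assumes that slave replicates the master owning the slot:

    ./redis-cli -c -h 192.168.1.171 -p 7071
    192.168.1.171:7071> set name z3
    -> Redirected to slot [5798] located at 192.168.1.172:7072
    OK

    ./redis-cli -c -h 192.168.1.173 -p 7073
    192.168.1.173:7073> readonly
    OK
    192.168.1.173:7073> get name
    "z3"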


How to add Node
Adding nodes (add master node and related slave node): deploying them in the config files is enough, but in a real project, even once the deployment succeeds, at the application level the new nodes still have to be added to Spring's applicationContext.xml. That means restarting Tomcat, which is unrealistic!
So for cluster management in a real production environment, you should use Zookeeper to add nodes dynamically online, with the Spring configuration managed centrally by Zookeeper. When a node is added, the new Redis node registers with Zookeeper; Zookeeper updates the znode, its watches notify all the other nodes, and the Spring applicationContext.xml is updated automatically, without restarting Tomcat.

Redis and SSDB
SSDB's API is very similar to Redis's; an SSDB client can even access what is stored on a redis server node directly. (That is, the SSDB api our services call can reach the redis servers.)
Since Redis and SSDB behave very much alike, and each has its own strengths and weaknesses, we generally combine them: a Redis + SSDB architecture is our distributed cache storage solution. Redis, as a memory-level distributed cache server, reads very fast, but with aof file persistence every write also has to write the aof log, so writes are less efficient. SSDB therefore takes over the writing: in a Redis + SSDB setup, Redis serves the reads and SSDB serves the writes.
Use Redis for its high availability and related strengths, and for its compatibility with SSDB.
Use SSDB for the high-performance reads and writes it gets from LevelDB, i.e. lean on its write performance.

Data consistency between MySQL and the Redis cache
Keeping redis and mysql in sync can be done at the code level roughly like this:
Read: read redis -> on a miss, read mysql -> write the mysql result back to redis
Write: write mysql -> on success, write redis
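A minimal cache-aside sketch of the read path in shell. This is an illustration, not the article's code: the mydb database, users table, and key layout are hypothetical, and it assumes redis-cli and the mysql client are on the PATH:

    #!/bin/bash
    # read path: try the cache first, fall back to mysql, then repopulate the cache
    key="user:42:name"
    val=$(redis-cli GET "$key")
    if [ -z "$val" ]; then
        # cache miss: query mysql (-N suppresses the column header)
        val=$(mysql -N -e "SELECT name FROM users WHERE id = 42" mydb)
        # write back so the next read is served from redis
        redis-cli SET "$key" "$val" > /dev/null
    fi
    echo "$val"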


Installing Redis
1. Put the downloaded redis-3.0.tar.gz under /usr/local on Linux.
2. Unpack it: tar -zxvf redis-3.0.tar.gz
3. Enter the redis-3.0 directory and compile with make (it is a source package, so it must be compiled first).
4. Enter src and install with make install (this installs redis onto Linux).
5. Create two directories to hold the redis binaries and configuration:
     mkdir -p /usr/local/redis/etc
     mkdir -p /usr/local/redis/bin
   (We could also work directly on the redis.conf inside redis-3.0, but that is the Redis source package, and deployment configuration should not be edited inside the source tree. So create separate directories for the Redis server and copy the useful files out of the source package into them.)
6. Move redis.conf from redis-3.0 to /usr/local/redis/etc.
7. Move mkreleasehdr.sh, redis-benchmark, redis-check-aof, redis-check-dump, redis-cli and redis-server from redis-3.0/src into the bin directory:    mv mkreleasehdr.sh redis-benchmark redis-check-aof redis-check-dump redis-cli redis-server /usr/local/redis/bin (these are the scripts and binaries needed to start Redis)
8. Start with the config file specified: ./redis-server /usr/local/redis/etc/redis.conf (first change daemonize to yes in redis.conf so it can start in the background, like a minimized process window on Windows). (Starting the redis server means pointing the redis binary at the redis config file; use the ones under the server directories we just created, not the ones inside the downloaded source package.)
9. Verify it started:
      ps -ef | grep redis
      netstat -tunpl | grep 6379
10. Start the client:
    ./redis-cli -h <server ip>
11. Quit the client with quit
12. Stop the redis server:
        ./redis-cli shutdown
(service redis start / service redis stop / service redis restart also work, provided a redis init script has been installed)


Operating the Redis cache server from a Redis client
Because Redis executes commands on a single thread, when a Redis server runs only one instance, two client threads can never touch a value in the cache at the same instant: one thread accesses it while the other waits, so the atomic operations are very safe. If you want Redis to handle concurrent access, just start more Redis instances; that amounts to concurrency, while each instance still keeps its single-threaded atomic operations.

Each Redis cache server node divides its store into 16 databases by default, numbered 0 through 15. The split is purely logical, a convenience for organizing the data we store. After logging in, queries go to database 0 by default; use select to switch databases, and move to move data from one database to another (a short example follows).
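For example:

    127.0.0.1:6379> set tmpkey 1
    OK
    127.0.0.1:6379> move tmpkey 7        (move tmpkey from database 0 to database 7)
    (integer) 1
    127.0.0.1:6379> select 7
    OK
    127.0.0.1:6379[7]> get tmpkey
    "1"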

Open the redis client:

info

1. List the keys on the current redis server:
        keys *
        keys invoice*
        exists newuser:99  (does this key exist?)

       type aaa   (what type does this key hold?)
  
2. Writing data
    (1) String
        set name Alvin
        get name
        set age 30
        get age
        setnx name Jian (set if not exists: checks whether the redis cache already has the key name; inserts only if it does not)
        setex time 10 12:12:12 (insert with a 10-second lifetime; after 10 seconds the record is deleted automatically)
        mset k1 11 k2 22 k3 33 (batch insert)
        mget k1 k2 k3 name age (batch query)
        incr age (increment)
        incrby age 5 (increment by 5)
        decr age (decrement)
        decrby age 5 (decrement by 5)
        append name rr3 (append to the string)
        strlen name (length of the string)
    (2) Hash (like a Map in Java)
         hset myhash key1 value1  (store a value in the hash)
         hget myhash key1             (read a value from the hash)
         hexists myhash key1       (does the hash contain this key?)
         hkeys myhash                 (all keys in the hash)
         hvals myhash                 (all values in the hash)
         hgetall myhash                (all keys and values in the hash)
         hdel myhash key1           (delete the given key from the hash)
    (3) List (a double-ended linked list; it can emulate a Stack or a Queue)
          lpush list1 "hello"          (push at the head; with lpop this emulates a Stack: last in, first out)
          rpush list2 "world"         (push at the tail; with lpop this emulates a Queue: first in, first out; a sketch follows)
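The pops that complete the two structures (lpop always takes from the head):

          lpush list1 "a"
          lpush list1 "b"
          lpop list1            (returns "b": last in, first out, i.e. a Stack)
          rpush list2 "a"
          rpush list2 "b"
          lpop list2            (returns "a": first in, first out, i.e. a Queue)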
    (4) Set (an unordered collection of strings, implemented with a hashtable)
       sadd set1 "hello"
       sadd set1 "hello1"
       sadd set1 "hello2"
       smembers set1               (list the members of the set)

    (5) ZSet (sorted set; each insert carries an ordering score, useful for sorting large data sets; a sketch follows)
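For example (the number before each member is its score, which determines the ordering):

          zadd zset1 1 "one"
          zadd zset1 2 "two"
          zadd zset1 3 "three"
          zrange zset1 0 -1 withscores        (all members, lowest score first)
          zrevrange zset1 0 -1                (all members, highest score first)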

3. Deleting data
       del name
4. Other commands
       keys *  (everything on the current cache server)
       keys name*
       select 1   (the default is 0; switch to cache database 1)
       move name 7  (move the key name to cache database 7)
       dbsize              (number of keys in the database)
       info                  (information about the Redis server's databases and how it is deployed)

Redis transactions
multi
incr age
incr name
exec
In Redis, multi starts a batch of commands. If all the commands succeed, fine: they are all written to the Redis cache database. But if some command fails, Redis does not roll back: the commands that succeeded before it are still written to the Redis store, and the failing command simply aborts. Data consistency cannot be guaranteed.

The way Redis executes a batch is: after the multi command, every subsequent command is appended to a queue; when the exec command runs, the queued commands are taken out and executed one by one. So this implementation cannot guarantee a transaction (a sketch of the failure mode follows).
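A sketch of that failure mode with the commands above, assuming age holds 30 and name holds a non-numeric string: the incr on name fails at exec time, yet the incr on age before it still takes effect:

    127.0.0.1:6379> multi
    OK
    127.0.0.1:6379> incr age
    QUEUED
    127.0.0.1:6379> incr name
    QUEUED
    127.0.0.1:6379> exec
    1) (integer) 31
    2) (error) ERR value is not an integer or out of range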

Today the way to get transaction-like behavior in Redis is Lua:
(1) yum install -y readline   (first install the readline library it depends on)
(2) yum install -y readline-devel
(3) Download Lua, then make linux, make install
(4) Have Redis run the script (for example through the EVAL command); inside the script you talk to Redis like this:
 redis.call("set","age","30");
local age = redis.call("get","age");
This then behaves like a transaction: the whole script performs its writes against the Redis cache database as one unit, succeeding or failing together (a runnable sketch follows).
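A one-line sketch of running such a script from the shell via EVAL (the trailing 1 is the number of keys; KEYS[1] is age and ARGV[1] is 30):

    ./redis-cli EVAL "redis.call('set', KEYS[1], ARGV[1]); return redis.call('get', KEYS[1])" 1 age 30
    "30"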

Redis file persistence
Redis is a key-value cache database that keeps all its data in memory; to keep that data safe, and so that restarting redis does not lose it, we must persist the data to files.

Redis has two file persistence formats: rdb and aof. redis defaults to rdb persistence, but in real work aof file persistence is what actually gets used.

  1. rdb: snapshots; the in-memory data is written to a file as a snapshot, dump.rdb by default. Snapshots fire according to the configured save rules (with the defaults, for instance, every 60 seconds once at least 10000 keys have changed), which is not very safe: crash just before the snapshot and the most recent writes since the last one are gone.
  2. aof: a log file; every command executed writes one log entry, persisted to the file.
redis defaults to rdb persistence. To turn on aof file persistence, open redis.conf, set the appendonly property to yes, and specify appendfilename; the usual default name is appendonly.aof (a sketch follows).
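A minimal sketch of the relevant redis.conf lines (the same directives appear in the appendix below):

    appendonly yes
    appendfilename "appendonly.aof"
    appendfsync everysec        (fsync once per second: the usual compromise between speed and safety)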


Redis as the fix for distributed session sharing
In a distributed environment with load balancing, two requests from the same client may be answered by different Tomcats, which creates the problem of diverging sessions. The redis solution is to store the session in the redis cluster and persist it to the aof file.

The Redis Java API
Redis is a cache server backed by a data store, and everything that does CRUD against that cache is a Redis client. We can of course use ./redis-cli; the Java api is simply a wrapper around the same commands as ./redis-cli. Import the jedis.jar package and use it.

Integrating Redis with Spring
Configure Spring's applicationContext.xml with a bean for every node of the redis cluster. The catch: nodes cannot be added or removed dynamically, because every change to the Spring configuration means restarting the Spring container, i.e. restarting Tomcat. Hence Zookeeper: put all of this configuration under Zookeeper's management. When a node is added, the new node registers with Zookeeper, Zookeeper updates the znode, its watches notify all the other nodes, and applicationContext.xml is updated automatically, with no service restart.




Appendix (redis.conf):
################################## INCLUDES ###################################

# If you have a standard configuration template usable by all your redis servers,
# but some servers still need a few personalized settings,
# you can use include to pull in other configuration files; this can be very useful.
#
# Note, though, that include cannot be rewritten by the config rewrite command.
# Since redis always takes the last processed line as the value of a configuration directive,
# you had best put include at the very top of this file to avoid overwriting
# runtime configuration changes; if overwriting is what you want, put it at the end instead.
#
# include /path/to/local.conf
# include /path/to/other.conf

################################ GENERAL #####################################

# By default redis does not run as a daemon (the default is no); if you want it to run in the background, change this to yes.
# When redis runs as a daemon, it writes a pid to the /var/run/redis.pid file.
daemonize yes

# When redis runs as a daemon it writes the pid to /var/run/redis.pid by default,
# but here you can specify the file location yourself.
pidfile /var/run/redis.pid

# The port to listen on, 6379 by default; if you set it to 0, redis will not listen for client connections on any socket.
port 6379

# Maximum backlog of the TCP listen queue.
#
# In high-concurrency environments you need to raise this value to avoid slow-client connection problems.
# The Linux kernel will silently shrink it to the value of /proc/sys/net/core/somaxconn,
# so you must change both values to get the effect you expect.
tcp-backlog 511

# By default, redis listens for client connections on all the server's available network interfaces.
# If you want it to listen on only one interface, just bind one IP (or several).
#
# Example, multiple IPs separated by spaces:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1

# Specify the path of the unix socket.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# Close the connection after a client has been idle this many seconds (0 to disable).
timeout 0

# TCP keepalive.
#
# If set to a non-zero value, SO_KEEPALIVE is used to send TCP ACKs to clients in the absence of communication.
# This is useful mainly for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0

# Define the log level.
# It can be one of these values:
# debug (useful in development or testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (suitable for production)
# warning (only very important messages are logged)
loglevel notice

# Specify the location of the log file.
logfile ""

# To send the log to the system logger, change this to yes;
# you can also optionally adjust the other syslog parameters to suit your needs.
# syslog-enabled no

# Set the syslog identity.
# syslog-ident redis

# Set the syslog facility; it must be USER or a value between LOCAL0 and LOCAL7.
# syslog-facility local0

# Set the number of databases.
# The default database is DB 0; on each connection you can select a different one with select <dbid>,
# where dbid must be a value between 0 and databases - 1.
databases 16

################################ SNAPSHOTTING ################################
#
# Save the DB to disk:
#
#   format: save <seconds> <number of writes>
#
#   Saves the dataset to disk according to the given interval and number of writes.
#
#   The examples below mean:
#   after 900 sec, save if at least 1 key changed
#   after 300 sec, save if at least 10 keys changed
#   after 60 sec, save if at least 10000 keys changed
#
#   Note: you can comment out all the save lines to disable saving entirely.
#   You can also disable it with a plain empty string:
#   save ""

save 900 1
save 300 10
save 60 10000

# By default, if the latest background save failed, redis stops accepting writes;
# this is a hard way of making the user aware that data is not persisting
# to disk correctly, since otherwise nobody might notice the disaster.
#
# Once the background save process is working again, redis automatically allows writes again.
#
# If you have solid monitoring in place, however, you may not want redis to behave this way; then change it to no.
stop-writes-on-bgsave-error yes

# Whether to compress string objects with LZF when dumping the .rdb database.
# The default is yes.
# If you want the saving child process to save some CPU, set it to no,
# but the dataset file will likely be bigger.
rdbcompression yes

# Whether to checksum the rdb file.
rdbchecksum yes

# Set the dump file name; the rdb file is redis's default persistence format.
dbfilename dump.rdb


# Specify the directory for the persistence files. dbfilename above only names the file;
# it is written inside this directory. This setting must be a directory, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a redis instance a copy of another redis instance.
# Note this only needs to be configured on the slave.
#
# slaveof <masterip> <masterport>

# If the master requires password authentication, set it here.
# masterauth <master-password>

# When a slave loses its connection with the master, or while replication
# is still in progress, the slave can behave in two ways:
#
# 1) If yes, the slave will still answer client requests, but the data may be
#    out of date, or may be empty if this is the first synchronization.
#
# 2) If no, the slave will reply with a "SYNC with master in progress" error
#    to every command except info and slaveof.
#
slave-serve-stale-data yes

# You can configure whether a slave instance accepts writes.
# Writing to a slave can be useful for storing some ephemeral data,
# since data written to a slave is easily deleted, compared with resynchronizing it from the master.
# But writes from clients with a misconfigured connection may also cause problems.
#
# Since redis 2.6, slaves are read-only by default.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes

# Slaves send pings to the server within a predefined interval.
# You can change this interval. The default is 10 seconds.
#
# repl-ping-slave-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data while slaves are disconnected, so that when a slave wants to
# reconnect, a full resync is often not needed: a partial resync is enough,
# transferring just the portion of data the slave missed while disconnected.
#
# The bigger the replication backlog, the longer the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# When the master stops working properly, Redis Sentinel elects a new master
# from among the slaves. The lower this value, the more likely the slave is to
# be chosen; but a value of 0 means the slave can never be selected at all.
#
# The default priority is 100.
slave-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEES that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# Set the authentication password.
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached, redis closes all new connections
# and sends a 'max number of clients reached' error.
#
# maxclients 10000

# If you set this, when the cached data reaches this memory limit, redis will
# remove keys according to the eviction policy you select.
#
# If redis cannot remove keys according to the policy, or if the policy is set
# to 'noeviction', redis starts replying with errors to write commands such as
# set and lpush, while continuing to answer read-only commands such as get.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# Maximum memory to use.
# maxmemory <bytes>

# Max memory policy: how redis selects what to remove when maxmemory is reached.
#
# volatile-lru -> remove a key with an expire set, using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are not suitable keys for eviction.
#
#       At the date of writing this commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead to wait for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log . Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

################################ LUA SCRIPTING  ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceed the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write commands was
# already issue by the script but the user don't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

################################ REDIS CLUSTER ###############################
#
# Enable or disable cluster mode
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system does not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000

# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
#    in order to try to give an advantage to the slave with the best
#    replication offset (more data from the master processed).
#    Slaves will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the slave will not try to failover
#    at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10

# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128

############################# Event notification ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/keyspace-events
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
#
#  The "notify-keyspace-events" takes as argument a string that is composed
#  by zero or multiple characters. The empty string means that notifications
#  are disabled at all.
#
#  Example: to enable list and generic events, from the point of view of the
#           event name, use:
#
#  notify-keyspace-events Elg
#
#  Example 2: to get the stream of the expired keys subscribing to channel
#             name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
#  By default all notifications are disabled because most users don't need
#  this feature and the feature has some overhead. Note that if you don't
#  specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happens to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# active rehashing the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply form time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients
# slave  -> slave clients and MONITOR clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform accordingly to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
