Parsing the Redis Configuration File


Below is the latest (3.2.9) Redis configuration file, with annotations:
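One convention worth calling out before the file itself: memory sizes must carry a unit, and the single-letter forms are decimal while the "b" forms are binary. A small sketch of the rule (the helper below is illustrative, not part of Redis):

```python
# Illustrative parser for redis.conf size notation: "1k" is decimal
# (1000 bytes) while "1kb" is binary (1024 bytes); units are
# case-insensitive.
UNITS = {
    "": 1,
    "k": 1000, "m": 1000**2, "g": 1000**3,
    "kb": 1024, "mb": 1024**2, "gb": 1024**3,
}

def parse_memory(value: str) -> int:
    value = value.strip().lower()
    digits = value.rstrip("bgkm")       # strip the unit suffix, if any
    return int(digits) * UNITS[value[len(digits):]]

print(parse_memory("1k"))    # 1000
print(parse_memory("1kb"))   # 1024
print(parse_memory("5GB"))   # 5368709120
```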

# Redis configuration file example.
#
# When a memory size is needed for a configuration directive, it must be
# specified together with a unit, usually in the form 1k 5gb 4m and so on:
#
# 1k  => 1000 bytes
# 1kb => 1024 bytes
# 1m  => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g  => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# Units are case insensitive.

######################### INCLUDES ############################

# If you have a standard configuration template shared by all your Redis
# servers but also need per-server customizations, you can use "include"
# to pull in other configuration files; this is very handy.
#
# Note that "include" lines are not rewritten by the CONFIG REWRITE
# command. Since Redis always uses the last processed line as the value of
# a configuration directive, you had better put includes at the beginning
# of this file to avoid overwriting configuration changes made at runtime.
# If instead you want the included files to override this one, put them at
# the end.
#
# include /path/to/local.conf
# include /path/to/other.conf

###################### NETWORK ###########################

# By default, Redis listens for client connections on every available
# network interface of the server. To listen on one or more specific
# interfaces, bind one IP address or several.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# By default only the local machine can connect; to accept connections
# from other hosts, comment the following line out.
bind 127.0.0.1

# Protected mode: yes enables it, no disables it. While it is enabled, a
# Redis instance that is not protected by a bind directive or a password
# only accepts connections from loopback addresses (127.0.0.1 and ::1).
protected-mode yes

# Listening port, 6379 by default. If port 0 is specified, Redis will not
# listen on a TCP socket at all.
port 6379

# This parameter sets the length of the TCP completed-connection queue
# (connections that have finished the three-way handshake). Its effective
# value cannot exceed the kernel's /proc/sys/net/core/somaxconn setting:
# Redis defaults to 511 while the Linux default is typically 128, which is
# far too small for heavily loaded services. With high concurrency and
# slow clients, tune the two values together, commonly to 2048 or more:
# add "net.core.somaxconn = 2048" to /etc/sysctl.conf, then run
# "sysctl -p" in a terminal to apply it.
tcp-backlog 511

# Unix socket on which Redis will accept local connections.
# unixsocket /tmp/redis.sock
# Permissions of the unix socket file.
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable).
timeout 0

# TCP keepalive. If non-zero, SO_KEEPALIVE is used and Redis periodically
# sends TCP ACKs to its peers at the configured interval. This has two
# benefits: it detects dead peers, and it keeps middle network equipment
# from dropping connections that look idle. Detecting that the peer is
# gone can take up to twice the configured value. 300 seconds is a
# reasonable choice.
tcp-keepalive 300

###################### GENERAL ##########################

# Redis does not run as a daemon by default; change this to yes if you
# need it to run in the background. When daemonized, Redis writes its pid
# to /var/run/redis.pid.
daemonize yes

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree.
# Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous liveness pings back to your supervisor.
supervised no

# The pid file of the Redis process.
pidfile /var/run/redis_6379.pid

# Server log verbosity. The levels are:
# debug   (a lot of information, useful for development/testing)
# verbose (many useful messages, but not as noisy as debug)
# notice  (moderately verbose, the right level for production)
# warning (only very important messages)
loglevel notice

# Log file name. With an empty string, logs go to standard output; note
# that when Redis runs as a daemon, standard output is /dev/null.
logfile ""

# Whether to also log to the system logger.
# syslog-enabled no
# Syslog identity.
# syslog-ident redis
# Syslog facility: LOCAL0-LOCAL7.
# syslog-facility local0

# Number of databases; the default database is DB 0. A client can select
# another one with "SELECT <dbid>".
databases 16

###################### SNAPSHOTTING ######################

# Snapshot configuration.
# Commenting out every "save" line disables saving to disk entirely.
# The lines below make Redis snapshot (persist) the dataset:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min)  if at least 10 keys changed
#   after 60 sec           if at least 10000 keys changed
# save ""
save 900 1
save 300 10
save 60 10000

# Whether Redis should stop accepting writes after an RDB background save
# fails: with yes writes are refused, with no Redis keeps working anyway.
# The rdb_last_bgsave_status field of INFO reports whether the last RDB
# save succeeded.
stop-writes-on-bgsave-error yes

# Whether to compress snapshots on disk. When enabled, Redis compresses
# them with the LZF algorithm; disable it if you would rather save CPU at
# the cost of larger files.
rdbcompression yes

# After the snapshot is written, a CRC64 checksum can be placed at the end
# of the file for integrity checking. This costs around 10% in
# performance; disable it if you want the maximum performance.
rdbchecksum yes

# The filename where to dump the DB.
dbfilename dump.rdb

# The data directory. The database is written inside this directory; RDB
# and AOF files are created here.
dir ./

######################## REPLICATION ###########################

# Make this instance a slave (replica) of a master.
# slaveof <masterip> <masterport>

# If the master is protected with requirepass, the slave must supply the
# master's password before replication can start; masterauth configures
# that password so the slave can authenticate after connecting.
# masterauth <master-password>

# When a slave loses its connection with the master, or while replication
# is still in progress, the slave can behave in two ways:
# 1) If slave-serve-stale-data is set to yes (the default), the slave will
#    keep replying to client requests, possibly with stale data.
# 2)
#    If slave-serve-stale-data is set to no, the slave replies with the
#    error "SYNC with master in progress" to every request except INFO
#    and SLAVEOF.
slave-serve-stale-data yes

# Slaves are read-only by default (yes). This can be changed to no to
# allow writes on a slave, which is not recommended.
slave-read-only yes

# Replication sync strategy: disk or socket. When a new slave connects,
# or a reconnecting slave cannot continue with a partial resync, a full
# sync is performed and the master produces an RDB file. It can be
# delivered in two ways:
# - disk: the master forks a child process that writes the RDB file to
#   disk, and the file is then sent to the slaves. While one RDB save is
#   in progress, multiple slaves can queue up and share the same file.
# - socket (diskless): the master forks a child process that streams the
#   RDB directly over the slaves' sockets, serving them one by one.
# With slow disks and a fast network, the socket method is preferable.
repl-diskless-sync no

# Delay, in seconds, before a diskless sync starts; avoid setting it to 0.
# Once the transfer begins, the master cannot serve newly arriving slaves
# until the next RDB transfer, so it is best to wait a while and let more
# slaves connect.
repl-diskless-sync-delay 5

# Interval at which slaves send PINGs to the master; the default is 10
# seconds.
# repl-ping-slave-period 10

# Replication timeout, applied on both sides:
# - when the master has not heard from a slave for longer than
#   repl-timeout, it considers the slave offline and drops it;
# - when the slave has had no traffic from the master for longer than
#   repl-timeout, it considers the master offline.
# Make sure repl-timeout is larger than repl-ping-slave-period, or
# timeouts will be detected whenever traffic is low.
# repl-timeout 60

# Whether to disable TCP_NODELAY on the replication link (yes or no; the
# default is no). With yes, Redis sends data to slaves in fewer, larger
# packets and uses less bandwidth, at the price of added replication
# delay. The default favors lower latency; with very heavy replication
# traffic, yes may be the better choice.
repl-disable-tcp-nodelay no

# Replication backlog size. The backlog is a circular buffer holding the
# most recent replication stream, so that a slave that disconnects for a
# while does not need a full resync: if a partial resync is possible,
# only the missing part held in the buffer is sent and normal replication
# resumes. The larger the buffer, the longer a slave can stay offline and
# still resync partially. The backlog is only allocated once at least one
# slave connects; after a period with no slaves, the memory is released.
# The default is 1mb.
# repl-backlog-size 1mb

# Number of seconds the master waits, after the last slave disconnects,
# before freeing the replication backlog.
# repl-backlog-ttl 3600

# When the master becomes unavailable, Sentinel elects a new master among
# the slaves according to this priority: the slave with the lowest value
# is promoted. A priority of 0 marks the slave as never eligible.
slave-priority 100

# Redis can stop the master from accepting writes when too few healthy
# slaves are attached: with min-slaves-to-write set to N, the master
# refuses write commands unless at least N healthy slaves are connected.
# This does not guarantee that N slaves will actually receive every
# write, but it narrows the window of lost writes when not enough healthy
# slaves exist. Setting it to 0 disables the feature.
# min-slaves-to-write 3
# A slave counts as healthy when its replication lag is no more than
# min-slaves-max-lag seconds.
# min-slaves-max-lag 10

# IP address and port that the slave reports about itself to the master.
# slave-announce-ip 5.5.5.5
# slave-announce-port 1234

####################### SECURITY ##########################
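Before the security directives, one more note on the replication safety valve above: min-slaves-to-write and min-slaves-max-lag combine into a simple gate on the master. A sketch (function and variable names are invented, not Redis source):

```python
# Sketch of the min-slaves-to-write / min-slaves-max-lag check a master
# performs before accepting a write command.
def master_accepts_writes(slave_lags: list[float],
                          min_slaves_to_write: int = 3,
                          min_slaves_max_lag: int = 10) -> bool:
    if min_slaves_to_write == 0:        # 0 disables the feature entirely
        return True
    # a slave is "healthy" when its lag is within min-slaves-max-lag
    healthy = sum(1 for lag in slave_lags if lag <= min_slaves_max_lag)
    return healthy >= min_slaves_to_write

print(master_accepts_writes([1.0, 2.5, 4.0]))       # True: 3 healthy slaves
print(master_accepts_writes([1.0, 2.5, 12.0]))      # False: only 2 healthy
print(master_accepts_writes([], min_slaves_to_write=0))  # True: disabled
```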
# The requirepass directive requires clients to authenticate with the
# AUTH command before any other command is accepted. Be careful when
# choosing the password: Redis is so fast that an outside attacker can
# try on the order of 150k passwords per second, so a simple password is
# easily brute-forced; use a long, complex one.
# requirepass foobared

# Dangerous commands can be renamed. For example, CONFIG can be renamed
# to something hard to guess so that ordinary users cannot call it while
# internal tools still can. A command can be disabled completely by
# renaming it to the empty string:
# rename-command CONFIG ""
# Note that renaming commands that end up in the AOF file or are
# transmitted to slaves may cause problems.

########################## LIMITS #############################

# Maximum number of connected clients, 10000 by default. Since Redis also
# needs file descriptors for internal uses (files, slave connections and
# so on), it reserves at least 32 of them, so the effective limit can be
# lower than the configured one. When the limit is reached, new
# connections receive the error 'max number of clients reached' and are
# closed.
# maxclients 10000

# Maximum amount of memory Redis may use. When the limit is reached, keys
# are removed according to maxmemory-policy. Note that slave output
# buffers are not counted against maxmemory, so on a master it is
# advisable to set maxmemory somewhat lower to keep the host from running
# out of memory.
# maxmemory <bytes>

# Policy applied when maxmemory is reached:
# volatile-lru    -> evict keys that have an expire set, using an LRU
#                    algorithm
# volatile-random -> evict random keys among those with an expire set
# volatile-ttl    -> evict keys with an expire set, nearest TTL first
# allkeys-lru     -> evict any key, using an LRU algorithm
# allkeys-random  -> evict any key at random
# noeviction      -> evict nothing; return an error on write commands
# maxmemory-policy noeviction

# Sample size for the LRU and TTL algorithms. Instead of keeping exact
# ordering, Redis picks this many random keys and evicts the best
# candidate among them (for LRU, the one idle the longest).
# maxmemory-samples 5

###################### APPEND ONLY MODE #########################

# Whether to enable the AOF.
appendonly no

# Name of the AOF file.
appendfilename "appendonly.aof"

# AOF fsync policy:
# no       -> never fsync; let the operating system flush the data.
#             Fastest.
# always   -> fsync after every write. Safest, slowest.
# everysec -> fsync once per second; up to one second of data may be
#             lost.
# appendfsync always
appendfsync everysec
# appendfsync no

# While an AOF rewrite or an RDB save is in progress, a large amount of
# I/O is performed, and with the everysec or always policies an fsync may
# block for a long time. When no-appendfsync-on-rewrite is set to yes, no
# fsync is issued on the AOF while such a background save is running: new
# writes stay in OS buffers and are flushed when the rewrite completes.
# Under the default Linux settings this can mean losing up to 30 seconds
# of data. The default no is the safer choice for durability; set it to
# yes only if the application is very latency sensitive.
no-appendfsync-on-rewrite no

# Automatic AOF rewrite: Redis calls BGREWRITEAOF automatically when the
# current AOF file has grown by the given percentage over its size after
# the last rewrite. With 100, a new rewrite starts once the file has
# doubled.
auto-aof-rewrite-percentage 100
# Minimum size below which the AOF is never rewritten automatically, to
# avoid rewriting a file that met the percentage but is still tiny.
auto-aof-rewrite-min-size 64mb

# The AOF may end up truncated at the tail, typically when the operating
# system hosting Redis crashed, especially with ext4 mounted without the
# data=ordered option (a Redis crash or kill alone does not truncate the
# file). When Redis finds such a file at startup, it can either abort or
# load as much data as possible. With yes (the default), a truncated AOF
# is loaded and Redis emits a log to notify the user. With no, the server
# refuses to start and the user must first repair the file with
# redis-check-aof.
aof-load-truncated yes

########################## SLOW LOG ##############################

# The slow log records commands whose execution exceeded a configured
# time. It is kept in memory, so there is no I/O cost. Commands taking
# longer than slowlog-log-slower-than microseconds are logged
# (1000000 = 1 second). A negative value disables the slow log, while 0
# forces the logging of every command.
slowlog-log-slower-than 10000

# Length of the slow log. When a new command is logged and the log is
# full, the oldest entry is removed. There is no limit to this length
# beyond available memory; SLOWLOG RESET reclaims the memory it uses.
slowlog-max-len 128

######################## LUA SCRIPTING #########################

# Maximum execution time of a Lua script, in milliseconds. When the limit
# is reached, Redis logs the event and starts replying with an error.
# While a script runs past the limit, only SCRIPT KILL and SHUTDOWN
# NOSAVE are available: the former can kill a script that has not yet
# performed any write; once a script has written something, only
# SHUTDOWN NOSAVE can stop it.
lua-time-limit 5000

######################## LATENCY MONITOR ######################

# The latency monitor samples operations that are slow to execute; the
# LATENCY command prints graphs and reports over the collected data.
# Only operations taking at least the number of milliseconds below are
# recorded; 0 turns monitoring off.
# Latency monitoring is disabled by default; if needed, it can also be
# enabled at runtime with
# CONFIG SET latency-monitor-threshold <milliseconds>.
latency-monitor-threshold 0

####################### REDIS CLUSTER #########################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand.
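Looping back to the AOF section for a moment before the cluster settings: the two auto-rewrite directives combine into a single trigger condition, roughly as follows (a sketch with invented names, not Redis source):

```python
# Sketch: when BGREWRITEAOF fires automatically, per the
# auto-aof-rewrite-percentage and auto-aof-rewrite-min-size directives.
def should_auto_rewrite(aof_size: int, size_after_last_rewrite: int,
                        percentage: int = 100,
                        min_size: int = 64 * 1024 * 1024) -> bool:
    if percentage == 0 or aof_size < min_size:  # percentage 0 disables it
        return False
    growth_pct = (aof_size - size_after_last_rewrite) * 100 \
        // size_after_last_rewrite
    return growth_pct >= percentage

print(should_auto_rewrite(128 * 1024**2, 64 * 1024**2))  # True: doubled
print(should_auto_rewrite(32 * 1024**2, 16 * 1024**2))   # False: under 64mb
```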
# It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration
# file. Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be
# unreachable for it to be considered in failure state.
# Most other internal time limits are multiples of the node timeout.
#
# cluster-node-timeout 15000

# A slave of a failing master will avoid starting a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange
#    messages in order to try to give an advantage to the slave with the
#    best replication offset (more data from the master processed).
#    Slaves will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the
#    master is still in the "connected" state), or the time elapsed since
#    the disconnection with the master (if the replication link is
#    currently down). If the last interaction is too old, the slave will
#    not try to failover at all.
#
# The point "2" can be tuned by the user.
# Specifically, a slave will not perform the failover if, since the last
# interaction with the master, the time elapsed is greater than:
#
#   (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the
# slave-validity-factor is 10, and assuming a default
# repl-ping-slave-period of 10 seconds, the slave will not try to
# failover if it was not able to talk with the master for longer than
# 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to
# failover a master, while a too small value may prevent the cluster from
# being able to elect a slave at all.
#
# For maximum availability, it is possible to set the
# slave-validity-factor to a value of 0, which means that slaves will
# always try to failover the master regardless of the last time they
# interacted with it. (However they'll always try to apply a delay
# proportional to their offset rank.)
#
# Zero is the only value able to guarantee that when all the partitions
# heal the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10

# Cluster slaves are able to migrate to orphaned masters, that is,
# masters left without working slaves. This improves the cluster's
# ability to resist failures, since otherwise an orphaned master can't be
# failed over if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its
# master, and so forth. It usually reflects the number of slaves you want
# for every master in your cluster.
#
# The default is 1 (slaves migrate only if their masters remain with at
# least one slave).
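Plugging the numbers from the node-timeout example above into that formula:

```python
# Worked example of the slave validity window: with node-timeout 30s,
# slave-validity-factor 10, and the default repl-ping-slave-period of
# 10s, a slave refuses to fail over after more than 310s out of contact.
def max_disconnection_time(node_timeout_s: int,
                           validity_factor: int,
                           ping_period_s: int = 10) -> int:
    return node_timeout_s * validity_factor + ping_period_s

print(max_disconnection_time(30, 10))  # 310
```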
# To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect
# there is at least one hash slot uncovered (no available node is serving
# it). This way if the cluster is partially down (for example a range of
# hash slots is no longer covered), the whole cluster eventually becomes
# unavailable. It automatically becomes available again as soon as all
# the slots are covered.
#
# However sometimes you want the subset of the cluster which is still
# working to continue accepting queries for the part of the key space
# that is still covered. In order to do so, just set the
# cluster-require-full-coverage option to no.
#
# cluster-require-full-coverage yes

# In order to set up your cluster make sure to read the documentation
# available at the http://redis.io web site.
################### EVENT NOTIFICATION ########################

# Redis can notify Pub/Sub clients about events happening in the key
# space. This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for
#        maxmemory)
#  A     Alias for g$lshzxe, so that the "AKE" string means all the
#        events.
#
#  The "notify-keyspace-events" directive takes as argument a string that
#  is composed of zero or more characters. The empty string means that
#  notifications are disabled.
#
#  Example: to enable list and generic events, from the point of view of
#           the event name, use:
#
#  notify-keyspace-events Elg
#
#  Example 2: to get the stream of the expired keys subscribing to
#             channel name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
#  By default all notifications are disabled because most users don't
#  need this feature and the feature has some overhead.
#  Note that if you don't specify at least one of K or E, no events will
#  be delivered.
notify-keyspace-events ""

##################### ADVANCED CONFIG ######################

# Hashes are encoded using a memory efficient data structure when they
# have a small number of entries, and the biggest entry does not exceed a
# given threshold. These thresholds can be configured using the following
# directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- good
# -1: max size: 4 Kb   <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb
# size), but if your use case is unique, adjust the settings as
# necessary.
list-max-ziplist-size -2

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each*
# side of the list to *exclude* from compression. The head and tail of
# the list are always uncompressed for fast push/pop operations.
# Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the
#    list, going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will
#    compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or
#    tail, but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded
# in order to save a lot of space. This encoding is only used when the
# length and elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When a HyperLogLog using the sparse representation
# crosses this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of the
# space efficient encoding without slowing down too much PFADD, which is
# O(N) with the sparse encoding.
# The value can be raised to ~ 10000 when CPU is not a concern, but
# space is, and the data set is composed of many HyperLogLogs with
# cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time
# in order to help rehashing the main Redis hash table (the one mapping
# top-level keys to values). The hash table implementation Redis uses
# (see dict.c) performs a lazy rehashing: the more operations you run
# into a hash table that is rehashing, the more rehashing "steps" are
# performed, so if the server is idle the rehashing is never complete and
# some more memory is used by the hash table.
#
# The default is to use this millisecond 10 times every second in order
# to actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it
# is not a good thing in your environment that Redis can reply from time
# to time to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of
# clients that are not reading data from the server fast enough for some
# reason (a common reason is that a Pub/Sub client can't consume messages
# as fast as the publisher can produce them).
#
# The limit can be set differently for the three different classes of
# clients:
#
# normal -> normal clients including MONITOR clients
# slave  -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the
# following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or
# if the soft limit is reached and remains reached for the specified
# number of seconds (continuously).
# So for instance if the hard limit
# is 32 megabytes and the soft limit is 16 megabytes / 10 seconds, the
# client will get disconnected immediately if the size of the output
# buffers reaches 32 megabytes, but will also get disconnected if the
# client reaches 16 megabytes and continuously overcomes the limit for
# 10 seconds.
#
# By default normal clients are not limited because they don't receive
# data without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested
# faster than it can be read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard and the soft limit can be disabled by setting them to
# zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that
# are never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks
# for tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive
# when there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up
# to 100 only in environments where very low latency is required.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
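The hard/soft output-buffer semantics described above reduce to the following check (a sketch with invented names; the defaults mirror the slave class, 256mb hard / 64mb soft / 60 seconds):

```python
# Sketch of the client-output-buffer-limit policy: disconnect immediately
# at the hard limit, or once the soft limit has been exceeded
# continuously for soft_seconds. A limit of 0 means "disabled".
def should_disconnect(buffer_bytes: int, secs_over_soft: int,
                      hard: int = 256 * 1024**2,
                      soft: int = 64 * 1024**2,
                      soft_seconds: int = 60) -> bool:
    if hard and buffer_bytes >= hard:
        return True
    return bool(soft) and buffer_bytes >= soft \
        and secs_over_soft >= soft_seconds

print(should_disconnect(300 * 1024**2, 0))    # True: hard limit reached
print(should_disconnect(100 * 1024**2, 10))   # False: soft, only 10s
print(should_disconnect(100 * 1024**2, 61))   # True: soft limit for >60s
```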