Redis Memory Management


It's been a long time since I last listened to music. I dug out a pair of earphones with only the left channel working and played two dance tracks by 罗百吉; every time I hear them, my heart races for a while...


I. Caching
A cache typically suits read-heavy, rarely-updated workloads. The whole point of caching is to avoid reading from disk directly, so caches are usually implemented in memory.
II. Data consistency
Cached data is generally not persisted (once you persist it, you are effectively using the cache as a NoSQL store or a database). As soon as persistence is involved, you have to think about how to guarantee data consistency.
III. Redis
1. Memory size of a single Redis instance: generally at most 10-20 GB, and ideally under 10 GB.
2. Sharded cluster: once the data volume grows beyond that, sharding has to be introduced.
IV. Selection requirements
1. High availability (implemented via Redis master-slave replication)
2. Horizontal scaling: the number of instances can be increased or decreased dynamically according to business capacity.
3. Smooth data migration (migration is transparent to clients)
4. Visual monitoring (for easier management)

Based on the considerations above, I recommend Codis, an open-source proxy-based Redis cluster solution. Official repository: https://github.com/wandoulabs/codis

Although I have used Redis before, some details inevitably get rusty over time, so below I am recording the memory-management parts of the official documentation that I read today:

http://redis.io/topics/memory-optimization

http://redis.io/topics/lru-cache

1. I recommend enabling the maxmemory setting in production.

(Without it, Redis may eat all available memory and risk taking the machine down. With it, Redis reclaims memory according to the configured eviction policy; if nothing can be reclaimed, write commands return an error that the application can catch and handle.)
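
A minimal sketch of doing this at runtime, assuming a local instance on the default port and the redis-py client (the 10gb figure is purely illustrative, not a value from the docs):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Cap the memory Redis may use for data (illustrative value).
r.config_set("maxmemory", "10gb")

# noeviction: once the limit is hit, write commands fail instead of keys being evicted.
r.config_set("maxmemory-policy", "noeviction")

print(r.config_get("maxmemory"), r.config_get("maxmemory-policy"))
```

The same two settings can also live in redis.conf (maxmemory 10gb / maxmemory-policy noeviction); changes made with CONFIG SET are lost on restart unless persisted with CONFIG REWRITE.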

Memory allocation

To store user keys, Redis allocates at most as much memory as the maxmemory setting enables (however there are small extra allocations possible).

The exact value can be set in the configuration file or set later via CONFIG SET (see Using memory as an LRU cache for more info). There are a few things that should be noted about how Redis manages memory:

  • Redis will not always free up (return) memory to the OS when keys are removed. This is not something special about Redis, but it is how most malloc() implementations work. For example if you fill an instance with 5GB worth of data, and then remove the equivalent of 2GB of data, the Resident Set Size (also known as the RSS, which is the number of memory pages consumed by the process) will probably still be around 5GB, even if Redis will claim that the user memory is around 3GB. This happens because the underlying allocator can't easily release the memory. For example often most of the removed keys were allocated in the same pages as the other keys that still exist.
  • The previous point means that you need to provision memory based on your peak memory usage. If your workload from time to time requires 10GB, even if most of the times 5GB could do, you need to provision for 10GB.
  • However, allocators are smart and are able to reuse free chunks of memory, so after you free 2GB of your 5GB data set, when you start adding more keys again, you'll see the RSS (Resident Set Size) stay steady and not grow further as you add up to 2GB of additional keys. The allocator is basically trying to reuse the 2GB of memory previously (logically) freed.
  • Because of all this, the fragmentation ratio is not reliable when your peak memory usage was much larger than the currently used memory. The fragmentation ratio is calculated as the physical memory actually used (the RSS value) divided by the amount of memory currently in use (the sum of all the allocations performed by Redis). Because the RSS reflects the peak memory, when the (virtually) used memory is low since a lot of keys / values were freed, but the RSS is high, the ratio RSS / mem_used will be very high (a small sketch of reading these values follows this list).
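
As a rough sketch of inspecting these numbers yourself, assuming a local instance and the redis-py client (the fields read here are the standard INFO memory fields):

```python
import redis

r = redis.Redis(host="localhost", port=6379)
mem = r.info("memory")

used = mem["used_memory"]      # bytes logically allocated by Redis
rss = mem["used_memory_rss"]   # resident set size reported by the OS

# Redis reports mem_fragmentation_ratio as RSS / used_memory, so after mass
# deletions (used memory low, RSS still at its peak) the ratio can look very
# high even though there is no "real" fragmentation problem.
print(f"used={used} rss={rss} ratio={rss / used:.2f} "
      f"reported={mem['mem_fragmentation_ratio']}")
```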

If maxmemory is not set Redis will keep allocating memory as it finds fit and thus it can (gradually) eat up all your free memory. Therefore it is generally advisable to configure some limit. You may also want to set maxmemory-policy to noeviction (which is not the default value in some older versions of Redis).

It makes Redis return an out of memory error for write commands if and when it reaches the limit - which in turn may result in errors in the application but will not render the whole machine dead because of memory starvation.
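
For illustration, here is a hedged sketch of handling that error in application code, again assuming the redis-py client; the key name is made up, and the exact exception class and message wording can vary across client versions:

```python
import logging
import redis

r = redis.Redis(host="localhost", port=6379)

try:
    r.set("cache:user:42", "serialized-profile")  # hypothetical cache write
except redis.exceptions.ResponseError as err:
    # With maxmemory reached under the noeviction policy, Redis rejects the
    # write with an OOM error instead of exhausting the machine's memory.
    if "OOM" in str(err) or "maxmemory" in str(err):
        logging.warning("Redis is full, skipping cache write: %s", err)
    else:
        raise
```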




