Redis Performance Testing

Source: Internet  Editor: 程序博客网  Date: 2024/05/17 03:44

Sometimes when you test Redis, especially across a network, it is hard to tell whether Redis itself is slow or the network is the bottleneck (for example, when the PHP machine calling Redis and the Redis server are separate boxes, or not on the same network segment). This tool lets you run the benchmark directly on the Redis host, or from the PHP host against Redis, giving you a coarse-grained way to tell the two apart.

redis-benchmark is the official performance-testing tool that ships with Redis; it can effectively measure the performance of a Redis service.

redis-benchmark

Usage is as follows. First change into the directory that contains the redis-benchmark binary:

Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]

 -h <hostname>      Server hostname (default 127.0.0.1)
 -p <port>          Server port (default 6379)
 -s <socket>        Server socket (overrides host and port)
 -c <clients>       Number of parallel connections (default 50)
 -n <requests>      Total number of requests (default 10000)
 -d <size>          Data size of SET/GET value in bytes (default 2)
 -k <boolean>       1=keep alive 0=reconnect (default 1)
 -r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD
                    Using this option the benchmark will get/set keys
                    in the form mykey_rand:000000012456 instead of constant
                    keys, the <keyspacelen> argument determines the max
                    number of values for the random number. For instance
                    if set to 10 only rand:000000000000 - rand:000000000009
                    range will be allowed.
 -P <numreq>        Pipeline <numreq> requests. Default 1 (no pipeline).
 -q                 Quiet. Just show query/sec values
 --csv              Output in CSV format
 -l                 Loop. Run the tests forever
 -t <tests>         Only run the comma-separated list of tests. The test
                    names are the same as the ones produced as output.
 -I                 Idle mode. Just open N idle connections and wait.

Example test commands:

1. redis-benchmark -h 192.168.1.201 -p 6379 -c 100 -n 100000
Benchmark the Redis server at 192.168.1.201:6379 with 100 parallel connections and 100,000 total requests.

2. redis-benchmark -h 192.168.1.201 -p 6379 -q -d 100
Benchmark reads and writes with a 100-byte payload.

3. redis-benchmark -t set,lpush -n 100000 -q
Run only specific tests (here SET and LPUSH).

4. redis-benchmark -n 100000 -q script load "redis.call('set','foo','bar')"
Benchmark a specific command, here loading a Lua script via SCRIPT LOAD.

Interpreting the results:

====== PING_INLINE ======
  10000 requests completed in 0.30 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.11% <= 1 milliseconds
86.00% <= 2 milliseconds
90.12% <= 3 milliseconds
96.68% <= 4 milliseconds
99.27% <= 5 milliseconds
99.54% <= 6 milliseconds
99.69% <= 7 milliseconds
99.78% <= 8 milliseconds
99.89% <= 9 milliseconds
100.00% <= 9 milliseconds
33222.59 requests per second

====== PING_BULK ======
  10000 requests completed in 0.27 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.93% <= 1 milliseconds
97.66% <= 2 milliseconds
100.00% <= 2 milliseconds
37174.72 requests per second

====== SET ======
  10000 requests completed in 0.32 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.22% <= 1 milliseconds
91.68% <= 2 milliseconds
97.78% <= 3 milliseconds
98.80% <= 4 milliseconds
99.38% <= 5 milliseconds
99.61% <= 6 milliseconds
99.72% <= 7 milliseconds
99.83% <= 8 milliseconds
99.94% <= 9 milliseconds
100.00% <= 9 milliseconds
30959.75 requests per second

====== GET ======
  10000 requests completed in 0.28 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.55% <= 1 milliseconds
98.86% <= 2 milliseconds
100.00% <= 2 milliseconds
35971.22 requests per second

====== INCR ======
  10000 requests completed in 0.14 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

95.61% <= 1 milliseconds
100.00% <= 1 milliseconds
69444.45 requests per second

====== LPUSH ======
  10000 requests completed in 0.21 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

18.33% <= 1 milliseconds
100.00% <= 1 milliseconds
48309.18 requests per second

====== LPOP ======
  10000 requests completed in 0.23 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.29% <= 1 milliseconds
99.76% <= 2 milliseconds
100.00% <= 2 milliseconds
44052.86 requests per second

====== SADD ======
  10000 requests completed in 0.22 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

2.37% <= 1 milliseconds
99.81% <= 2 milliseconds
100.00% <= 2 milliseconds
44444.45 requests per second

====== SPOP ======
  10000 requests completed in 0.22 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

4.27% <= 1 milliseconds
99.84% <= 2 milliseconds
100.00% <= 2 milliseconds
44642.86 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
  10000 requests completed in 0.22 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

12.35% <= 1 milliseconds
99.62% <= 2 milliseconds
100.00% <= 2 milliseconds
46082.95 requests per second

====== LRANGE_100 (first 100 elements) ======
  10000 requests completed in 0.48 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.01% <= 1 milliseconds
3.27% <= 2 milliseconds
98.71% <= 3 milliseconds
99.93% <= 4 milliseconds
100.00% <= 4 milliseconds
20964.36 requests per second

====== LRANGE_300 (first 300 elements) ======
  10000 requests completed in 1.26 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.01% <= 2 milliseconds
0.14% <= 3 milliseconds
0.90% <= 4 milliseconds
7.03% <= 5 milliseconds
31.68% <= 6 milliseconds
78.93% <= 7 milliseconds
98.88% <= 8 milliseconds
99.56% <= 9 milliseconds
99.72% <= 10 milliseconds
99.95% <= 11 milliseconds
100.00% <= 11 milliseconds
7961.78 requests per second

====== LRANGE_500 (first 450 elements) ======
  10000 requests completed in 1.82 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.01% <= 2 milliseconds
0.06% <= 3 milliseconds
0.14% <= 4 milliseconds
0.30% <= 5 milliseconds
0.99% <= 6 milliseconds
2.91% <= 7 milliseconds
8.11% <= 8 milliseconds
43.15% <= 9 milliseconds
88.38% <= 10 milliseconds
97.25% <= 11 milliseconds
98.61% <= 12 milliseconds
99.26% <= 13 milliseconds
99.30% <= 14 milliseconds
99.44% <= 15 milliseconds
99.48% <= 16 milliseconds
99.64% <= 17 milliseconds
99.85% <= 18 milliseconds
99.92% <= 19 milliseconds
99.95% <= 20 milliseconds
99.96% <= 21 milliseconds
99.97% <= 22 milliseconds
100.00% <= 23 milliseconds
5491.49 requests per second

====== LRANGE_600 (first 600 elements) ======
  10000 requests completed in 2.29 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.01% <= 2 milliseconds
0.05% <= 3 milliseconds
0.10% <= 4 milliseconds
0.19% <= 5 milliseconds
0.34% <= 6 milliseconds
0.46% <= 7 milliseconds
0.58% <= 8 milliseconds
4.46% <= 9 milliseconds
21.80% <= 10 milliseconds
40.48% <= 11 milliseconds
60.14% <= 12 milliseconds
79.81% <= 13 milliseconds
93.77% <= 14 milliseconds
97.14% <= 15 milliseconds
98.67% <= 16 milliseconds
99.08% <= 17 milliseconds
99.30% <= 18 milliseconds
99.41% <= 19 milliseconds
99.52% <= 20 milliseconds
99.61% <= 21 milliseconds
99.79% <= 22 milliseconds
99.88% <= 23 milliseconds
99.89% <= 24 milliseconds
99.95% <= 26 milliseconds
99.96% <= 27 milliseconds
99.97% <= 28 milliseconds
99.98% <= 29 milliseconds
100.00% <= 29 milliseconds
4359.20 requests per second

====== MSET (10 keys) ======
  10000 requests completed in 0.37 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

0.01% <= 1 milliseconds
2.00% <= 2 milliseconds
18.41% <= 3 milliseconds
88.55% <= 4 milliseconds
96.09% <= 5 milliseconds
99.50% <= 6 milliseconds
99.65% <= 7 milliseconds
99.75% <= 8 milliseconds
99.77% <= 9 milliseconds
99.78% <= 11 milliseconds
99.79% <= 12 milliseconds
99.80% <= 13 milliseconds
99.81% <= 15 milliseconds
99.82% <= 16 milliseconds
99.83% <= 17 milliseconds
99.84% <= 19 milliseconds
99.85% <= 21 milliseconds
99.86% <= 23 milliseconds
99.87% <= 24 milliseconds
99.88% <= 25 milliseconds
99.89% <= 27 milliseconds
99.90% <= 28 milliseconds
99.91% <= 30 milliseconds
99.92% <= 32 milliseconds
99.93% <= 34 milliseconds
99.95% <= 35 milliseconds
99.96% <= 36 milliseconds
99.97% <= 37 milliseconds
99.98% <= 39 milliseconds
99.99% <= 41 milliseconds
100.00% <= 41 milliseconds
27173.91 requests per second

hGetAll performance problems

If Redis is slow, hGetAll is often the culprit, although there are of course other possible causes.

Before I paid attention to this command, I had been storing data the Memcache way. After switching to Redis, storing and reading hash data became far more convenient than with Memcache. But then a problem appeared: when a hash is small, hGetAll shows almost no issue, but once the hash grows past 50 fields or so, hGetAll's performance problem becomes very visible. I will skip the detailed numbers here.

Redis is single-threaded! While it handles one request, every other request has to wait. Usually requests finish quickly, but when we use HGETALL, Redis must iterate over every field to collect the data, and the CPU time this consumes grows in proportion to the number of fields. Add PIPELINING on top and things get even worse.
PERFORMANCE = CPUs / OPERATIONs
In other words, to improve performance in this scenario you either add CPUs to the computation or reduce the number of operations. If you want to keep using the hash structure and still avoid the problem, a convenient approach is to store the hash serialized into a string, then on reads fetch that string with hGet(key, field) (or several at once with hmGet) and deserialize it on the client.

例如:

$arrKey = array('dbfba184bef630526a75f2cd073a6098', 'dbfba184bef630526a75f2cd0dswet98');
$strKey = 'test';
$obj->hmGet($strKey, $arrKey);

This reduces the original hGetAll to hGet: we no longer need to traverse every field in the hash. Even though we cannot get more CPUs involved on the Redis side, the number of operations drops sharply, so the performance gain is still significant. The downside is equally obvious: like every redundancy-based scheme, it wastes a lot of memory.
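The idea can be sketched as follows. This is a minimal illustration, not the article's exact PHP code: a plain dict stands in for the Redis hash commands (a hypothetical stand-in; with a real client you would call hSet/hGet against a live server), and JSON plays the role of the serializer.

```python
import json

# Hypothetical in-memory stand-in for Redis; key -> {field: value}.
store = {}

profile = {"uid": 42, "name": "alice", "tags": ["a", "b"]}

# Instead of writing each field separately (which hGetAll must later
# traverse on Redis's single thread), store the whole hash serialized
# as one string under a single field.
store["user:42"] = {"blob": json.dumps(profile)}

# Reading back is a single-field hGet plus client-side deserialization.
raw = store["user:42"]["blob"]
restored = json.loads(raw)

print(restored["name"])  # the full hash is recovered from one field
```

The memory cost the article mentions shows up here too: the serialized string duplicates structure that Redis would otherwise store compactly as hash fields.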

Some will ask: we got rid of the field traversal, but we added a deserialization step, and deserialization is often expensive too; can this really be faster? The key point is that the field traversal happens on a single CPU (inside Redis), whereas the deserialization, in any language, can be spread across multiple CPUs via multiple processes or threads, so overall performance still improves.

Also, many people's intuition is to solve the problem by running multiple Redis instances. That does add CPUs to the computation and does help, but note that hGetAll combined with PIPELINING tends to make the number of operations explode geometrically; next to that, the handful of extra instances we can add is a drop in the bucket, so in this case that approach cannot fundamentally solve the problem.

Related reading: http://www.redis.cn/topics/benchmarks.html
