Redis Testing


import io.codis.jodis.JedisResourcePool;
import io.codis.jodis.RoundRobinJedisPool;
import redis.clients.jedis.*;

import java.io.IOException;

/**
 * Created by mike on 2017/7/21.
 */
public class CodisUtils {

    public static void main(String[] args) throws IOException {
        // A Jodis pool is built here, but the test threads below bypass it
        // and connect directly to a single codis-server.
        JedisResourcePool jedisPool = RoundRobinJedisPool.create()
                .curatorClient("10.154.247.74:2181", 30000)
                .zkProxyDir("/jodis/codis-demo")
                .build();
        int[] servers = {6381, 6382, 6383, 6384};
        for (int i = 0; i < 20; i++) {
            Jedis jedis = new Jedis("10.154.247.74", 6381);
            new Thread(new Codis(jedis)).start();
        }
    }
}

class Codis implements Runnable {
    Jedis jedis;
    JedisResourcePool pool;
    int[] servers;

    public Codis(Jedis jedis) {
        this.jedis = jedis;
    }

    public Codis(JedisResourcePool pool) {
        this.pool = pool;
    }

    @Override
    public void run() {
        long t1 = System.currentTimeMillis();
        String name = Thread.currentThread().getName();
        System.out.println("thread start======" + name);
        // 10,000 set/get pairs per thread.
        for (int i = 10000; i > 0; i--) {
            jedis.set(name + i, "bar" + i);
            jedis.get(name + i);
        }
        long t2 = System.currentTimeMillis();
        System.out.println("thread end in======" + (t2 - t1) / 1000.0);
    }
}

1 Single-machine, single-thread test

With the server local, processing 10,000 entries runs at close to 30,000 reads/writes per second:

thread end in======0.361

With the server over the network, processing 10,000 entries runs at only around 4,000 reads/writes per second:

thread end in======2.824
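The throughput figures quoted in this section follow from dividing the 10,000 set/get pairs by the elapsed time. A minimal sketch of that arithmetic (timings taken from the two runs above):

```java
// Convert the per-thread timings above into read/write pairs per second.
public class Throughput {
    static double pairsPerSecond(int pairs, double seconds) {
        return pairs / seconds;
    }

    public static void main(String[] args) {
        // Timings from the single-thread runs above.
        System.out.printf("local:   %.0f pairs/s%n", pairsPerSecond(10000, 0.361)); // ~27,700
        System.out.printf("network: %.0f pairs/s%n", pairsPerSecond(10000, 2.824)); // ~3,541
    }
}
```

The network run is slower by roughly the ratio of round-trip latencies, since each set/get waits for a full round trip.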

2 Single-machine, multi-thread test (the main code is shown at the top):

With the server local and 10 threads (each doing 10,000 reads/writes), throughput reaches about 60,000 per second:

thread end in======1.586
thread end in======1.585
thread end in======1.592
thread end in======1.608
thread end in======1.611
thread end in======1.608
thread end in======1.609
thread end in======1.62
thread end in======1.645
thread end in======1.748

With the server on another machine over the network, 10 threads (10,000 reads/writes each) reach about 30,000 per second:

thread end in======3.67
thread end in======3.76
thread end in======3.799
thread end in======3.802
thread end in======3.796
thread end in======3.804
thread end in======3.812
thread end in======3.834
thread end in======3.856
thread end in======3.905

With the server local and 20 threads (10,000 reads/writes each), throughput reaches about 70,000 per second:

thread end in======2.945
thread end in======2.946
thread end in======2.942
thread end in======3.0
thread end in======3.004
thread end in======3.013
thread end in======3.01
thread end in======3.018
thread end in======3.015
thread end in======3.015
thread end in======3.017
thread end in======3.041
thread end in======3.047
thread end in======3.06
thread end in======3.121
thread end in======3.129
thread end in======3.137
thread end in======3.145
thread end in======3.204
thread end in======3.203

With the server on another machine over the network, 20 threads (10,000 reads/writes each) reach about 40,000 per second:

thread end in======4.6
thread end in======4.598
thread end in======4.604
thread end in======4.63
thread end in======4.644
thread end in======4.646
thread end in======4.679
thread end in======4.694
thread end in======4.695
thread end in======4.709
thread end in======4.731
thread end in======4.737
thread end in======4.752
thread end in======4.767
thread end in======4.79
thread end in======4.837
thread end in======4.862
thread end in======4.874
thread end in======4.893
thread end in======4.961

3 Multi-server, multi-thread test

Two servers, 20 threads:

public static void main(String[] args) throws IOException {
    JedisResourcePool jedisPool = RoundRobinJedisPool.create()
            .curatorClient("10.154.247.74:2181", 30000)
            .zkProxyDir("/jodis/codis-demo")
            .build();
    int[] servers = {6381, 6382};
    for (int i = 0; i < 20; i++) {
        // Alternate the 20 threads between the two servers.
        Jedis jedis = new Jedis("10.154.247.74", servers[i % 2]);
        new Thread(new Codis(jedis)).start();
    }
}
With the servers local, throughput rises to about 100,000 reads/writes per second:

thread end in======1.672
thread end in======1.737
thread end in======1.741
thread end in======1.75
thread end in======1.752
thread end in======1.769
thread end in======1.789
thread end in======1.792
thread end in======1.796
thread end in======1.802
thread end in======1.821
thread end in======1.838
thread end in======1.854
thread end in======1.863
thread end in======1.876
thread end in======1.881
thread end in======1.906
thread end in======1.913
thread end in======1.946
thread end in======2.001

With the servers over the network, throughput is basically unchanged:

thread end in======4.182
thread end in======4.203
thread end in======4.22
thread end in======4.226
thread end in======4.294
thread end in======4.314
thread end in======4.329
thread end in======4.34
thread end in======4.351
thread end in======4.364
thread end in======4.368
thread end in======4.378
thread end in======4.382
thread end in======4.393
thread end in======4.403
thread end in======4.412
thread end in======4.418
thread end in======4.438
thread end in======4.462
thread end in======4.469

Next, 20 threads were started simultaneously on each of two machines.

Execution with the servers local:

thread end in======1.588
thread end in======1.617
thread end in======1.637
thread end in======1.664
thread end in======1.716
thread end in======1.719
thread end in======1.795
thread end in======1.798
thread end in======1.832
thread end in======1.959
thread end in======2.235
thread end in======2.282
thread end in======2.328
thread end in======2.382
thread end in======2.592
thread end in======2.649
thread end in======2.7
thread end in======2.702
thread end in======2.71
thread end in======2.716

Execution with the servers over the network:

thread end in======4.731
thread end in======4.765
thread end in======4.833
thread end in======4.842
thread end in======4.856
thread end in======4.859
thread end in======4.869
thread end in======4.909
thread end in======4.912
thread end in======4.924
thread end in======4.938
thread end in======4.963
thread end in======4.96
thread end in======4.977
thread end in======5.022
thread end in======5.034
thread end in======5.061
thread end in======5.093
thread end in======5.201
thread end in======5.203

Averaged out, this reaches about 120,000 reads/writes per second; computed against the slowest thread's finish time, it is closer to 80,000 per second.
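The pessimistic ~80,000 figure divides all 400,000 pairs (40 threads × 10,000 each) by the slowest thread's finish time, while the ~120,000 figure effectively sums the per-thread rates. A sketch of the pessimistic calculation, using the 5.203 s slowest time from the network listing above:

```java
// Aggregate throughput using the slowest thread as the wall-clock bound.
public class AggregateThroughput {
    static double bySlowestThread(int threads, int pairsPerThread, double slowestSeconds) {
        return (double) threads * pairsPerThread / slowestSeconds;
    }

    public static void main(String[] args) {
        // 40 threads total (20 on each machine), slowest finished in 5.203 s.
        System.out.printf("%.0f pairs/s%n", bySlowestThread(40, 10000, 5.203)); // ~76,879
    }
}
```

Summing per-thread rates overstates sustained throughput because fast threads finish early and leave the servers less loaded; the slowest-thread figure is the safer capacity estimate.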

Also, because Codis routes requests through a proxy, it cannot match this direct-to-server approach in speed given the same number of threads and machines.




+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

2. Master-slave data replication test

group1

redis-cli -h 192.168.30.113 -p 6379

auth xxx

192.168.30.113:6379> auth xxxx

OK

192.168.30.113:6379> set a 1

OK

redis-cli -h 192.168.30.115 -p 6380

192.168.30.115:6380> auth xxx

OK

192.168.30.115:6380> get a

"1"

 

group2

redis-cli -h 192.168.30.114 -p 6379

auth xxx

192.168.30.114:6379> set b 2

OK

redis-cli -h 192.168.30.113 -p 6380

auth xxx

192.168.30.113:6380> get b

"2"

 

group3

redis-cli -h 192.168.30.115 -p 6379

auth xxx

192.168.30.115:6379> set c 3

OK

redis-cli -h 192.168.30.114 -p 6380

auth xxx

192.168.30.114:6380> get c

"3"

 

3. High availability test

1. After proxy1 (192.168.30.113) goes offline, verify that redis can still be reached through proxy2 (192.168.30.114) on port 19000.

Stop proxy1:

[codisapp@mvxl2579 codis]$ chmod u+x *

[codisapp@mvxl2579 codis]$ ps -ef|grep proxy

codisapp     10479  8345  0 13:36 pts/0    00:00:00 grep proxy

codisapp     30898     1  0 May31 ?        00:08:46 /codisapp/svr/codis/bin/codis-proxy --ncpu=4 --config=/codisapp/conf/codis/proxy.toml --log=/codisapp/logs/codis/proxy.log --log-level=WARN

[codisapp@mvxl2579 codis]$ ./stop_codis_proxy.sh

[codisapp@mvxl2579 codis]$ ps -ef|grep proxy

codisapp     10520  8345  0 13:36 pts/0    00:00:00 grep proxy

 

[codisapp@mvxl2580 ~]$ redis-cli -h 192.168.30.114 -p 19000

192.168.30.114:19000> auth xxxx

OK

192.168.30.114:19000> set d 4

OK

192.168.30.114:19000> get d

"4"

2. With proxy1 offline, also stop codis server 1's 6379 and 6380 instances, then verify that redis can still be reached through proxy3 on port 19000.

Next, stop the redis (codis-server) processes on server 1:

[codisapp@mvxl2579 codis]$ ps -ef|grep codis-server

codisapp     10584  8345  0 13:44 pts/0    00:00:00 grep codis-server

codisapp     28919     1  0 May31 ?        00:01:28 /codisapp/svr/codis/bin/codis-server *:6379                        

codisapp     29151     1  0 May31 ?        00:01:18 /codisapp/svr/codis/bin/codis-server *:6380                        

[codisapp@mvxl2579 codis]$ ./stop_codis_server.sh

[codisapp@mvxl2579 codis]$ ps -ef|grep codis-server

codisapp     10626  8345  0 13:44 pts/0    00:00:00 grep codis-server

Access through proxy3 still works normally:

[codisapp@mvxl2580 ~]$ redis-cli -h 192.168.30.115 -p 19000

192.168.30.115:19000> auth xxxxxxx

OK

192.168.30.115:19000> set e 5

OK

192.168.30.115:19000> get e

"5"

3. HA automatic master-slave failover test

Kill the 6379 process in server group 3 (192.168.30.115), check whether HA performs the master-slave switch automatically, and see how the original master is handled after it is restarted.

On the group 3 host, kill the process serving 6379:

[codisapp@mvxl2581 codis]$ ps -ef|grep codis-server

codisapp       813   454  0 15:58 pts/0    00:00:00 grep codis-server

codisapp     21682     1  0 May31 ?        00:01:17 /codisapp/svr/codis/bin/codis-server *:6379                        

codisapp     21701     1  0 May31 ?        00:01:14 /codisapp/svr/codis/bin/codis-server *:6380                        

[codisapp@mvxl2581 codis]$ kill 21682

[codisapp@mvxl2581 codis]$ ps -ef|grep codis-server

codisapp       849   454  0 15:58 pts/0    00:00:00 grep codis-server

codisapp     21701     1  0 May31 ?        00:01:14 /codisapp/svr/codis/bin/codis-server *:6380

 

The dashboard then shows that the slave in group 3 has been automatically promoted to master, and that the original master is offline and has been removed from the group.

The offline former master must have its service restarted, be re-added to the group, and then be synchronized by clicking the small helper icon:

4. Master-slave replication test with a large data volume

Insert 200,000 keys (change INSTANCE_NAME before each test run):

vim redis-key.sh

#!/bin/bash
# Single quotes keep the password from terminating the string, which the
# original nested double quotes would have done.
REDISCLT='redis-cli -h 192.168.30.113 -p 19000 -a xxxxxxx -n 0 set'
ID=1

while [ $ID -le 50000 ]
do
    INSTANCE_NAME="i-2-$ID-VM"
    UUID=`cat /proc/sys/kernel/random/uuid`
    CREATED=`date "+%Y-%m-%d %H:%M:%S"`
    $REDISCLT vm_instance:$ID:instance_name "$INSTANCE_NAME"
    $REDISCLT vm_instance:$ID:uuid "$UUID"
    $REDISCLT vm_instance:$ID:created "$CREATED"
    $REDISCLT vm_instance:$INSTANCE_NAME:id "$ID"
    ID=`expr $ID + 1`
done
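If the same workload were driven from Java instead of bash, the key construction could look like the following hypothetical sketch. It only builds the four key/value pairs per ID and does not connect to redis; a real run would hand each pair to a client such as Jedis:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.UUID;

public class KeyGenerator {
    // Build the same four entries per ID as the bash script above.
    static Map<String, String> entriesFor(int id, String uuid, String created) {
        String instanceName = "i-2-" + id + "-VM";
        Map<String, String> m = new LinkedHashMap<>();
        m.put("vm_instance:" + id + ":instance_name", instanceName);
        m.put("vm_instance:" + id + ":uuid", uuid);
        m.put("vm_instance:" + id + ":created", created);
        m.put("vm_instance:" + instanceName + ":id", String.valueOf(id));
        return m;
    }

    public static void main(String[] args) {
        Map<String, String> m =
                entriesFor(1, UUID.randomUUID().toString(), "2017-07-21 00:00:00");
        m.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```

50,000 IDs times four entries gives the 200,000 keys mentioned above.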

 

After running the script above, the dashboard showed normal master-slave replication in every group, with QPS above 800.


5. Stress test

1. /codisapp/svr/codis/bin/redis-benchmark -h 192.168.30.113 -p 19000 -a "xxxxxx" -c 100 -n 100000

100 concurrent connections and 100,000 requests, measuring the performance of the redis service behind the proxy at 192.168.30.113:19000.

The output below shows that for every command, 100% of the requests completed within the reported latency bound, ranging from 1 ms (PING_INLINE) up to 74 ms (LRANGE_600).

 [codisapp@mvxl2579 codis]$ /codisapp/svr/codis/bin/redis-benchmark -h 192.168.30.113 -p 19000 -a "xxxxxxx" -c 100 -n 100000

====== PING_INLINE ======

  100000 requests completed in 0.83 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 1 milliseconds

120627.27 requests per second

 

====== PING_BULK ======

  100000 requests completed in 0.79 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 20 milliseconds

127064.80 requests per second

 

====== SET ======

  100000 requests completed in 1.08 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 14 milliseconds

92506.94 requests per second

 

====== GET ======

  100000 requests completed in 1.02 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 15 milliseconds

97560.98 requests per second

 

====== INCR ======

  100000 requests completed in 0.80 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 19 milliseconds

124843.95 requests per second

 

====== LPUSH ======

  100000 requests completed in 0.81 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 18 milliseconds

123762.38 requests per second

 

====== LPOP ======

  100000 requests completed in 0.82 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 6 milliseconds

121212.12 requests per second

 

====== SADD ======

  100000 requests completed in 0.93 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 6 milliseconds

108108.11 requests per second

 

====== SPOP ======

  100000 requests completed in 0.91 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 16 milliseconds

110011.00 requests per second

 

====== LPUSH (needed to benchmark LRANGE) ======

  100000 requests completed in 0.88 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 15 milliseconds

114155.25 requests per second

 

====== LRANGE_100 (first 100 elements) ======

  100000 requests completed in 3.76 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 19 milliseconds

26588.67 requests per second

 

====== LRANGE_300 (first 300 elements) ======

  100000 requests completed in 11.85 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 53 milliseconds

8438.82 requests per second

 

====== LRANGE_500 (first 450 elements) ======

  100000 requests completed in 17.56 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 61 milliseconds

5693.46 requests per second

 

====== LRANGE_600 (first 600 elements) ======

  100000 requests completed in 22.23 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 74 milliseconds

4499.44 requests per second

 

====== MSET (10 keys) ======

  100000 requests completed in 3.24 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

100.00% <= 20 milliseconds

30873.73 requests per second

2. /codisapp/svr/codis/bin/redis-benchmark -h 192.168.30.113 -p 19000 -a "xxxxxxx" -q -d 100

Test read/write performance with 100-byte payloads:

[codisapp@mvxl2579 codis]$ /codisapp/svr/codis/bin/redis-benchmark -h 192.168.30.113 -p 19000 -a "xxxxxxx"  -q -d 100

PING_INLINE: 124223.60 requests per second

PING_BULK: 129533.68 requests per second

SET: 59772.86 requests per second

GET: 68446.27 requests per second

INCR: 104166.67 requests per second

LPUSH: 121359.23 requests per second

LPOP: 125156.45 requests per second

SADD: 78926.60 requests per second

SPOP: 79113.92 requests per second

LPUSH (needed to benchmark LRANGE): 125628.14 requests per second

LRANGE_100 (first 100 elements): 19149.75 requests per second

LRANGE_300 (first 300 elements): 6358.90 requests per second

LRANGE_500 (first 450 elements): 4207.87 requests per second

LRANGE_600 (first 600 elements): 3238.34 requests per second

MSET (10 keys): 23430.18 requests per second 

3. /codisapp/svr/codis/bin/redis-benchmark -h 192.168.30.113 -p 19000 -a "xxxxxxx" -t set,lpush -n 100000 -q

Benchmark only a subset of commands:

[codisapp@mvxl2579 codis]$ /codisapp/svr/codis/bin/redis-benchmark -h 192.168.30.113 -p 19000 -a "xxxxxxx"  -t set,lpush -n 100000 -q

SET: 65231.57 requests per second

LPUSH: 119189.52 requests per second
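The quiet-mode lines above ("SET: 65231.57 requests per second") are easy to post-process when comparing runs. A small hypothetical parser, not part of the original test, that turns them into a command-to-RPS map:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BenchmarkParser {
    // Parse redis-benchmark -q lines such as "SET: 65231.57 requests per second".
    static Map<String, Double> parse(String[] lines) {
        Map<String, Double> out = new LinkedHashMap<>();
        String suffix = " requests per second";
        for (String line : lines) {
            int colon = line.lastIndexOf(": ");
            if (colon < 0 || !line.endsWith(suffix)) continue; // skip non-result lines
            String name = line.substring(0, colon);
            String rps = line.substring(colon + 2, line.length() - suffix.length());
            out.put(name, Double.parseDouble(rps));
        }
        return out;
    }

    public static void main(String[] args) {
        String[] sample = {
            "SET: 65231.57 requests per second",
            "LPUSH: 119189.52 requests per second",
        };
        parse(sample).forEach((k, v) -> System.out.printf("%s -> %.2f%n", k, v));
    }
}
```

Diffing two such maps makes regressions between proxy configurations immediately visible.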