nginx+tomcat+redis+mysql setup and tuning


Setting up a high-performance server

How do you support a high-performance site with a large request volume?

1: Reduce requests on the development side: merge CSS files, combine background images into sprites, cut down MySQL queries, and so on.
2: On the operations side, use nginx's expires directive and the browser cache to reduce requests (see the sketch just after this list).
3: Use a CDN to answer requests.
4: The requests that remain are unavoidable; support them with a server cluster plus load balancing.
    Once you reach step 4, stop looking for ways to reduce requests and instead think about how to respond to high concurrency well.
    The big picture: since responding is unavoidable, the job is to spread the work evenly across the servers; in the ideal state every server's capacity is fully used.
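As a minimal illustration of point 2, an expires block for static assets might look like the following; the file types and the 30-day lifetime are assumptions for this sketch, not taken from the setup below.

location ~* \.(css|js|png|jpg|gif)$ {
    expires 30d;                      # let browsers cache static assets for 30 days
    add_header Cache-Control public;
}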

Preparing the environment

Windows 7, Intel i7 4710MQ CPU 2.50GHz, 64-bit, 8G RAM. The host runs Oracle and MySQL, plus two CentOS 6 virtual machines: centos1 with nginx and redis, and centos2 with httpd (ab is used for HTTP load testing) and sysbench.
Deploying the environment

nginx

My nginx build includes the consistent-hash module (used with the memcached integration), the stub_status module (used for load-test statistics), and the redis module (used for the redis integration). In the configuration below, a request is first looked up in redis with the URI as the key; on a miss (404) nginx falls through to the tomcat backend.

[root@centos1 nginx]# ./sbin/nginx -V
nginx version: nginx/1.10.2
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC)
configure arguments: --prefix=/usr/local/nginx --add-module=/usr/local/nginx/nginx_module_hash --with-http_stub_status_module --add-module=/usr/local/nginx/ngx_http_redis-0.3.8

upstream zmy_zh {
    server 192.168.1.100;
}

# redis cluster; best driven by lua for dynamic cluster configuration and lookup
# (I have not started learning lua yet)
upstream redis_cluster {
    server 192.168.106.130:7000;
    server 192.168.106.130:7001;
    server 192.168.106.130:7002;
    #server 192.168.106.130:7003;
    #server 192.168.106.130:7004;
    #server 192.168.106.130:7005;
    keepalive 100;
}

# memcached cluster
upstream mem_cluster {
    # consistent hashing
    consistent_hash $request_uri;
    server 192.168.106.130:11211;
    server 192.168.106.130:11212;
    server 192.168.106.130:11213;
}

server {
    listen       80;
    server_name  192.168.106.130;

    #charset koi8-r;
    access_log  logs/zh.access.log  main;
    error_log   logs/zh.error.log  info;

    location ~ /redis {
        proxy_pass http://zmy_zh;
        proxy_redirect off;
        proxy_set_header Host 192.168.1.100;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        index succ.jsp;
    }

    location ~ / {
        set $redis_key "$uri";
        redis_pass redis_cluster;
        error_page 404 = /redis;    # on a cache miss, query the backend server
    }

    # nginx statistics module, used while load testing
    location = /status {
        stub_status on;
        access_log off;
    }

    # redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}
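To smoke-test the cache path, one can seed a key whose name matches a URI and fetch it through nginx. The key and value here are made up for the sketch, and note that the redis module is not cluster-aware, so depending on which node owns the slot the lookup may still miss:

./redis-cli -c -h 192.168.106.130 -p 7000 set /succ.jsp 'cached page body'
curl -i http://192.168.106.130/succ.jsp    # expect the cached body on a hit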

redis

The version used is redis 3+; starting with redis 3 there is a solid built-in cluster mechanism with automatic failover. Below are a few custom scripts.

start.sh

#!/bin/bash
cd /usr/local/redis/bin
./redis-server ../cluster/7000/redis.conf
./redis-server ../cluster/7001/redis.conf
./redis-server ../cluster/7002/redis.conf
./redis-server ../cluster/7003/redis.conf
./redis-server ../cluster/7004/redis.conf
./redis-server ../cluster/7005/redis.conf
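A quick way to verify the instances came up (assuming redis-cli sits in the same bin directory; cluster_state only reports ok once the cluster has been created):

./redis-cli -p 7000 cluster info | grep cluster_state   # expect cluster_state:ok
./redis-cli -p 7000 cluster nodes                       # should list all six nodes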

cluster.sh. The IP:PORT pairs must use concrete IP addresses, never localhost or 127.0.0.1: if localhost is used, any other server or client that connects will read back a cluster topology of localhost:7001, ..., and will then try to fetch data through localhost, leaving the service completely unusable from other machines.

cd ../src
./redis-trib.rb create --replicas 1 192.168.106.130:7000 192.168.106.130:7001 192.168.106.130:7002 192.168.106.130:7003 192.168.106.130:7004 192.168.106.130:7005

Redis Cluster: the "Too many Cluster redirections" error

The client retries 5 times; past 5 redirections it throws the error above. The stack here is spring-data-redis + jedis. When connecting directly through jedis, a common error is:

Exception in thread "main" redis.clients.jedis.exceptions.JedisClusterException: No way to dispatch this command to Redis Cluster because keys have different slots.
	at redis.clients.jedis.JedisClusterCommand.run(JedisClusterCommand.java:46)
	at redis.clients.jedis.JedisCluster.sdiff(JedisCluster.java:1497)
	at com.redis.RedisClusterClient.main(RedisClusterClient.java:38)
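The error means the keys of a multi-key command hash to different slots. A small sketch of the usual workaround, hash tags, using jedis's slot calculator (the JedisClusterCRC16 class and its package are as in jedis 2.x; treat the exact location as an assumption):

import redis.clients.util.JedisClusterCRC16;

public class SlotDemo {
    public static void main(String[] args) {
        // Plain keys usually land in different slots, so SDIFF across them fails:
        System.out.println(JedisClusterCRC16.getSlot("tag:WEB"));
        System.out.println(JedisClusterCRC16.getSlot("tag:ruby"));
        // With a hash tag, only the part inside {} is hashed, so both keys
        // share a slot and multi-key commands can be dispatched directly:
        System.out.println(JedisClusterCRC16.getSlot("{tag}:WEB"));
        System.out.println(JedisClusterCRC16.getSlot("{tag}:ruby"));
    }
}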

The cluster in this experiment is 3 masters + 3 slaves; the 16384 (2^14) hash slots are distributed evenly across the 3 masters. A query over several keys must first work out which node holds each key before it can run; spring-data-redis, which wraps jedis, does exactly that by looping over the keys. During testing, however, a spring-data-redis bug turned up. The program is as follows:

redisTemplate.execute(new RedisCallback<Void>() {
    @Override
    public Void doInRedis(RedisConnection connection) {
        // without the extra "".getBytes() argument, only the first set's elements come back
        Set<byte[]> set = connection.sDiff("tag:WEB".getBytes(), "tag:ruby".getBytes(), "".getBytes());
        set.forEach((key) -> {
            System.out.println("diff:" + new String(key));
        });
        return null;
    }
});

// org.springframework.data.redis.connection.jedis.JedisClusterConnection
public Set<byte[]> sDiff(byte[]... keys) {
    if (ClusterSlotHashUtil.isSameSlotForAllKeys(keys)) {
        try {
            return cluster.sdiff(keys);
        } catch (Exception ex) {
            throw convertJedisAccessException(ex);
        }
    }
    byte[] source = keys[0];
    // the bug is this copy: copyOfRange's end index is exclusive, so it should be
    // keys.length; with keys.length - 1 the last key is silently dropped
    byte[][] others = Arrays.copyOfRange(keys, 1, keys.length - 1);
    ByteArraySet values = new ByteArraySet(sMembers(source));
    Collection<Set<byte[]>> resultList = clusterCommandExecutor
            .executeMuliKeyCommand(new JedisMultiKeyClusterCommandCallback<Set<byte[]>>() {
                @Override
                public Set<byte[]> doInCluster(Jedis client, byte[] key) {
                    return client.smembers(key);
                }
            }, Arrays.asList(others)).resultsAsList();
    if (values.isEmpty()) {
        return Collections.emptySet();
    }
    for (Set<byte[]> singleNodeValue : resultList) {
        values.removeAll(singleNodeValue);
    }
    return values.asRawSet();
}
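The empty extra key in the first snippet is the workaround: since the copy's end index is off by one, the dummy last argument is the one that gets dropped instead of a real key. The direct fix would be a one-liner (copyOfRange's end index is exclusive):

byte[][] others = Arrays.copyOfRange(keys, 1, keys.length);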

The redis-trib.rb create command in cluster.sh above only needs to run when the cluster is first created; running it again fails with the following error:

>>> Creating cluster
[ERR] Node 192.168.106.130:7000 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.

That is because after the VM is shut down and start.sh is run again, the starting redis instances read their nodes files and restore the cluster state on their own:

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
cluster-config-file nodes-7000.conf

-rw-r--r--. 1 root root     769 Nov  4 20:14 nodes-7000.conf
-rw-r--r--. 1 root root     769 Nov  4 20:14 nodes-7001.conf
-rw-r--r--. 1 root root     769 Nov  4 20:14 nodes-7002.conf
-rw-r--r--. 1 root root     769 Nov  4 20:14 nodes-7003.conf
-rw-r--r--. 1 root root     769 Nov  4 20:14 nodes-7004.conf
-rw-r--r--. 1 root root     769 Nov  4 20:14 nodes-7005.conf

stop.sh

#!/bin/bash
cd /usr/local/redis/bin/
./redis-cli -h 127.0.0.1 -p 7000 shutdown
./redis-cli -h 127.0.0.1 -p 7001 shutdown
./redis-cli -h 127.0.0.1 -p 7002 shutdown
./redis-cli -h 127.0.0.1 -p 7003 shutdown
./redis-cli -h 127.0.0.1 -p 7004 shutdown
./redis-cli -h 127.0.0.1 -p 7005 shutdown

tomcat

Installed on Windows; details omitted.

mysql

mysql 5.7, installed on Windows; details omitted.
Problem: the virtual machines cannot connect to mysql. To allow root to log in remotely, add a `user` row whose host is '%' and reset the password:

INSERT INTO `user` VALUES ('%', 'root', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', '', '', '', '', '0', '0', '0', '0', 'mysql_native_password', 0x2A38314635453231453335343037443838344136434434413733314145424642364146323039453142, 'N', '2016-11-06 13:57:24', null, 'N');
UPDATE `user` SET authentication_string=password('123') WHERE user='root';
FLUSH PRIVILEGES;

I also experimented with the following situation:

host        user
%           root
localhost   root

If you delete the row whose host is localhost, run FLUSH PRIVILEGES, and then reconnect, mysql throws the exception below, and a Java program using the mysql JDBC driver cannot obtain a connection either:

ERROR 2003: Can't connect to MySQL server on 'localhost@XXX' (10061) use password

The fix is as follows:

C:\Program Files\MySQL\MySQL Server 5.7\bin>mysqld --defaults-file="C:\ProgramData\MySQL\MySQL Server 5.7\my.ini" --console --skip-grant-tables
...

Once it reports that the server started, connect with a client and insert the row back:

INSERT INTO `user` VALUES ('localhost', 'root', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', '', '', '', '', '0', '0', '0', '0', 'mysql_native_password', 0x2A38314635453231453335343037443838344136434434413733314145424642364146323039453142, 'N', '2016-11-06 13:57:24', null, 'N');
UPDATE `user` SET authentication_string=password('123') WHERE user='root';
FLUSH PRIVILEGES;

Then restart the MySQL service from services.msc and everything runs normally again.

The database connection pool also failed to connect to mysql with the URL jdbc:mysql://localhost:3306/imooc:

ERROR: The server time zone value '中国标准时间' is unrecognized or represents more than one time zone ...

Adding these parameters to the URL fixes it; note that if the URL is written directly in an XML file, each & must be escaped as &amp;:

jdbc:mysql://localhost:3306/imooc?useUnicode=true&amp;serverTimezone=UTC&amp;characterEncoding=UTF-8
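For reference, a minimal JDBC sketch using these parameters (plain & outside of XML; the user and password follow the earlier SQL and are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class MysqlPing {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/imooc"
                + "?useUnicode=true&serverTimezone=UTC&characterEncoding=UTF-8";
        // credentials are placeholders for this sketch
        try (Connection conn = DriverManager.getConnection(url, "root", "123")) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}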

Approaching high concurrency

On Linux, high concurrency comes down to two things: the server must be able to accept a large number of requests, and it must be allowed to hold enough file descriptors. Only once these two system-level requirements are met can the application provide better service.

Socket connections

System-level tuning covers the maximum connection backlog, faster recycling of TCP connections, whether sockets in TIME_WAIT may be reused, and SYN-flood (DoS) protection, which is switched off here for the benchmark. The script below makes the changes:

[root@centos1 nginx]# cat tcpopt.sh
#!/bin/bash
echo 50000 > /proc/sys/net/core/somaxconn      # raise the listen backlog limit
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle     # recycle TIME_WAIT connections faster
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse       # allow reusing TIME_WAIT sockets for new connections
echo 0 > /proc/sys/net/ipv4/tcp_syncookies     # disable SYN cookies (flood protection) for the benchmark
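These writes to /proc do not survive a reboot; as a sketch, the persistent equivalent is the matching entries in /etc/sysctl.conf, applied with sysctl -p:

net.core.somaxconn = 50000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 0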

nginx runs in a master-worker model and defaults to a single worker process. Set the worker count equal to the number of CPU cores and pin each worker to a core (cutting the cost of switching worker processes between cores), and raise the number of connections each worker may hold:

worker_processes     4;
worker_cpu_affinity  0001 0010 0100 1000;

events {
    worker_connections  10240;
}

File descriptors (fd)

By default the system allows each process only 1024 open file descriptors, which is clearly not enough. Raise the limits on the maximum fd count (in general any ordinary server can be configured high enough to meet the requirement):

In /etc/security/limits.conf (the * means the limits apply to all users):

* soft nofile 2000000
* hard nofile 2000000
* soft nproc 2000000
* hard nproc 2000000
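A quick check after logging in again (the limits take effect per session):

ulimit -n    # expect 2000000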

And the number of open files nginx allows each worker process:

worker_rlimit_nofile 10000;

mysql optimization

To be added.

tomcat optimization

To be added.

redis optimization

To be added.

ab load testing

To be added.
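Until this section is written up, a typical ab invocation against the nginx front end might look like this; the URL, request count, and concurrency are placeholders:

ab -n 100000 -c 1000 http://192.168.106.130/succ.jsp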
