Apache + Tomcat Integration and Load-Test Verification

1. Background
The product's back-office management system is written in Java, and before 2017 Tomcat alone served as the web server. Tomcat is a lightweight application server, and at modest concurrency that is perfectly adequate. As the company grew, however, the volume of data that terminals pull from the server kept increasing: we began to see projects with more than 600 terminals, situations where a hotel's annual power-line switchover or an unexpected outage rebooted every set-top box at once, and cases where an operator deliberately rebooted all terminals from the back office. Each of these drives the server's concurrent load too high and keeps terminals from fetching data normally. The server environment therefore needed tuning, and the plan was to add Apache in front of Tomcat to raise concurrent-processing capacity.
How to integrate Apache with Tomcat is covered by plenty of posts online, for example http://www.cnblogs.com/leslies2/archive/2012/07/23/2603617.html, so I will not repeat it here. The core of the method is a shared library, mod_jk.so, which acts as the connector between Apache and Tomcat.
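For reference, wiring in mod_jk usually comes down to a few lines in the Apache configuration plus a workers.properties file. The sketch below is generic; the paths and the worker name are placeholders, not the values from our actual deployment:

# httpd.conf (or an included mod_jk.conf): load the connector module
LoadModule jk_module modules/mod_jk.so
JkWorkersFile /etc/httpd/conf/workers.properties
JkLogFile     /var/log/httpd/mod_jk.log
JkLogLevel    info

# workers.properties: one AJP worker pointing at the local Tomcat
worker.list=worker1
worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009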

2. Initial Approach
Much of the early work was done by a colleague, and I am genuinely grateful for it: the groundwork was solid and the integrated system ran without problems. When I took over, I did the usual thing and borrowed from posts like the one linked above, as well as from my colleague's results. In that mod_jk configuration, every request is handed off to Tomcat. It does meet the basic integration requirement; before getting to the drawbacks, let's look at how it was done. The mod_jk.conf tells the whole story.
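The original mod_jk.conf was shown as a screenshot, so I cannot reproduce it exactly; a minimal sketch of that kind of "hand everything to Tomcat" setup, with an assumed worker name, looks like this:

# mod_jk.conf: every URL is mapped through mod_jk to the Tomcat worker
JkMount /* worker1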
A configuration like this certainly works, but it does not let Apache do any real work: Apache merely passes each request along, and the load still lands squarely on Tomcat. Don't take my word for it; the load tests below prove it.

3. Optimization
What if Apache handled static files directly? That would exploit Apache's strength at handling many concurrent connections and take load off Tomcat at the same time. The idea had been in the back of my mind for a while, and I finally acted on it. The official Apache documentation actually explains how; it just takes a little effort to work through the English.
The relevant part of the documentation is just a short paragraph and a small example. Fine, with that understood, it was time to change our configuration; a sketch of the change follows.
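The change itself is small. The sketch below is my reconstruction rather than the original file (which was posted as an image): keep mapping everything to Tomcat, then use JkUnMount to carve out the static types that Apache should serve itself. The extension list and worker name here are only examples:

# mod_jk.conf: dynamic requests still go to Tomcat ...
JkMount /* worker1
# ... but these static types are excluded from mod_jk and served by Apache itself
JkUnMount /*.jpg  worker1
JkUnMount /*.png  worker1
JkUnMount /*.ts   worker1
JkUnMount /*.m3u8 worker1
# .json is deliberately not listed in the production file, so .json requests
# are still forwarded to Tomcat -- that is the "before" case in the benchmark below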
I handed the most common and, at the same time, highest-traffic file types in our product straight to Apache. Is that all it takes? Try it and you are guaranteed a 404: the data the clients request all lives under Tomcat's root directory, where Apache cannot see it. Copying everything over is obviously not the answer, and it would make later data updates a problem, so the fix is a symlink (sketched right after this paragraph). One more note: my production configuration does not list .json among the types Apache serves itself, which means .json requests are still handed from Apache to Tomcat; that gives us a convenient before/after comparison. Now let's load-test and look at the results (using the ab tool, run on the server itself):
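As for the symlink, the idea is simply to make the data under Tomcat's web root visible inside Apache's DocumentRoot without copying it. The paths below are placeholders for our real ones, and Apache must be allowed to follow the link:

# expose Tomcat's /ios content under Apache's DocumentRoot
ln -s /usr/local/tomcat/webapps/ROOT/ios /var/www/html/ios

# httpd.conf: allow Apache to follow symlinks under the DocumentRoot
<Directory "/var/www/html">
    Options FollowSymLinks
</Directory>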
The first run is the "before" case, in which Apache forwards the .json request to Tomcat:
[root@APP-Server bin]# ab -c 200 -n 2000 http://172.16.1.7/ios/rules/rules-0303.json
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 172.16.1.7 (be patient)
Completed 200 requests
Completed 400 requests
Completed 600 requests
Completed 800 requests
Completed 1000 requests
Completed 1200 requests
Completed 1400 requests
Completed 1600 requests
Completed 1800 requests
Completed 2000 requests
Finished 2000 requests


Server Software: Apache/2.2.15
Server Hostname: 172.16.1.7
Server Port: 80

Document Path: /ios/rules/rules-0303.json
Document Length: 62224 bytes

Concurrency Level: 200
Time taken for tests: 2.436 seconds
Complete requests: 2000
Failed requests: 0
Write errors: 0
Total transferred: 124980192 bytes
HTML transferred: 124455930 bytes
Requests per second: 821.10 [#/sec] (mean)
Time per request: 243.575 [ms] (mean)
Time per request: 1.218 [ms] (mean, across all concurrent requests)
Transfer rate: 50108.15 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0     2   22.3      2  1000
Processing:     2    92  381.9     30  2431
Waiting:        1    90  382.2     28  2431
Total:         18    94  382.7     31  2434

Percentage of the requests served within a certain time (ms)
50% 31
66% 35
75% 38
80% 39
90% 40
95% 42
98% 2430
99% 2432
100% 2434 (longest request)

The second run is the "after" case: the .json file is served directly by Apache (this rule has not yet been put into the released configuration):
[root@APP-Server bin]# ab -c 200 -n 2000 http://172.16.1.7/ios/rules/rules-0303.json
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 172.16.1.7 (be patient)
Completed 200 requests
Completed 400 requests
Completed 600 requests
Completed 800 requests
Completed 1000 requests
Completed 1200 requests
Completed 1400 requests
Completed 1600 requests
Completed 1800 requests
Completed 2000 requests
Finished 2000 requests


Server Software: Apache/2.2.15
Server Hostname: 172.16.1.7
Server Port: 80

Document Path: /ios/rules/rules-0303.json
Document Length: 62224 bytes

Concurrency Level: 200
Time taken for tests: 0.210 seconds
Complete requests: 2000
Failed requests: 0
Write errors: 0
Total transferred: 125042490 bytes
HTML transferred: 124510224 bytes
Requests per second: 9534.07 [#/sec] (mean)
Time per request: 20.977 [ms] (mean)
Time per request: 0.105 [ms] (mean, across all concurrent requests)
Transfer rate: 582111.26 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0     0    0.5      0     2
Processing:     2    14   33.2      8   206
Waiting:        2    13   33.2      8   206
Total:          4    14   33.5      8   208

Percentage of the requests served within a certain time (ms)
50% 8
66% 9
75% 9
80% 9
90% 9
95% 9
98% 205
99% 208
100% 208 (longest request)

OK, at this point the numbers speak for themselves: letting Apache serve the static file directly cut the test run from about 2.4 seconds to 0.21 seconds and raised throughput from roughly 820 to roughly 9,500 requests per second. In real deployments the gap should be even wider, because Tomcat starts to fall behind at higher concurrency and its latency grows sharply.