Installing and Using Tengine


0. Environment and References

CentOS 6.5
Tengine official site (http://tengine.taobao.org)

1. Install the Dependencies

yum -y groupinstall "Development tools"
yum -y groupinstall "Server Platform Development"
yum -y install pcre-devel
## dependencies the compile step complained about
yum install -y libxslt-devel
yum install -y gd-devel
yum install -y lua-devel
## GeoIP, for resolving where client requests come from
cd /usr/local/src
wget http://pkgs.repoforge.org/geoip/geoip-devel-1.4.6-1.el6.rf.x86_64.rpm
wget http://pkgs.repoforge.org/geoip/geoip-1.4.6-1.el6.rf.x86_64.rpm
rpm -ivh geoip-devel-1.4.6-1.el6.rf.x86_64.rpm
rpm -ivh geoip-1.4.6-1.el6.rf.x86_64.rpm

Add a runtime user for Tengine

## -r creates a system (non-login) account
useradd -r tengine

Compile and Install
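First download and unpack the Tengine source into the build directory. The version number below (2.1.2) is only an example; substitute the current release from the official site:

cd /usr/local/src
wget http://tengine.taobao.org/download/tengine-2.1.2.tar.gz   ## example version, pick the current one
tar xzf tengine-2.1.2.tar.gz
cd tengine-2.1.2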

./configure \
    --prefix=/usr/local/tengine \
    --sbin-path=/usr/local/tengine/sbin/tengine \
    --conf-path=/usr/local/tengine/conf/tengine.conf \
    --error-log-path=/usr/local/tengine/logs/error.log \
    --http-log-path=/usr/local/tengine/logs/access.log \
    --pid-path=/var/run/tengine/tengine.pid \
    --lock-path=/var/lock/tengine.lock \
    --user=tengine \
    --group=tengine \
    --with-http_ssl_module \
    --enable-mods-shared=all \
    --dso-path=/usr/local/tengine/dso \
    --without-http_lua_module

make && make install

Put the Tengine binary on the PATH

vim /etc/profile.d/tengine.sh

## add this line to the file:
export PATH=/usr/local/tengine/sbin:$PATH

## then reload the profile (a full reboot is not necessary):
source /etc/profile.d/tengine.sh
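To confirm the binary is on the PATH and review what was compiled in, you can run the standard nginx-style version flags, which Tengine inherits (output varies by build):

tengine -v    ## version only
tengine -V    ## version plus the configure arguments used at build time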

Configuration

vim /usr/local/tengine/conf/tengine.conf

## number of worker processes; auto matches the CPU core count (global context)
## check the core count with `cat /proc/cpuinfo`
worker_processes auto;
## pin each worker to a CPU core (global context)
worker_cpu_affinity auto;
## maximum number of open file descriptors per worker (global context)
worker_rlimit_nofile 51200;
## maximum number of connections per worker (events context)
worker_connections 51200;
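For orientation, a minimal sketch of where each of these directives belongs in tengine.conf (the server block is just a placeholder):

user                  tengine tengine;
worker_processes      auto;        ## main (global) context
worker_cpu_affinity   auto;        ## main (global) context
worker_rlimit_nofile  51200;       ## main (global) context

events {
    worker_connections  51200;     ## events context
}

http {
    server {
        listen  80;
        ## site configuration goes here
    }
}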

Starting and Stopping Tengine

## start
tengine -c /usr/local/tengine/conf/tengine.conf
## stop
tengine -s stop
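Two more signals worth knowing, inherited from nginx so they work the same way in Tengine: `-t` validates the configuration file without starting, and `-s reload` applies changes gracefully:

## test the configuration file for syntax errors
tengine -t -c /usr/local/tengine/conf/tengine.conf
## reload gracefully after configuration changes
tengine -s reload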

Run a Load Test

Fixing the apr_socket_recv: Connection reset by peer (104) error:

## raise the open-file limit for the current shell
ulimit -n 51200
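Note that `ulimit -n` only affects the current shell session. To make the limit survive new logins, one common option (values here are an assumption, adjust to your needs) is /etc/security/limits.conf:

## /etc/security/limits.conf -- applies to new login sessions
*    soft    nofile    51200
*    hard    nofile    51200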

Start the test:

/usr/local/apache-2.2.27/bin/ab -c 2000 -n 20000 http://192.168.1.101/index.html

On the machine under test, use top, htop, or vmstat to watch the CPU load and the I/O wait (wa) value:

vmstat 1    ## watch the wa column in the cpu section

Enabling Tengine's Overload Protection (sysguard)

The sysguard module has to be loaded dynamically as a DSO. With the configuration below, once the system load average reaches 1.1, Tengine stops serving normal requests and redirects them to the URL named by the action parameter (/loadlimit).

## in the main context of tengine.conf: load the module as a DSO
dso {
    load ngx_http_sysguard_module.so;
}

http {
    sysguard on;
    sysguard_load load=1.1 action=/loadlimit;

    server {
        location /loadlimit {
            return 503;
        }
    }
}
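To sanity-check the setup (assuming the server from the load test above, 192.168.1.101), request the limit location directly; per the `return 503;` above, it should answer with a 503 even when the machine is idle:

curl -i http://192.168.1.101/loadlimit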

Health checks for upstream servers in a cluster

http {
    upstream cluster1 {
        ## consistent hashing on the request URI (instead of simple round-robin)
        consistent_hash $request_uri;

        server 192.168.0.1:80 id=1001;
        server 192.168.0.2:80 id=1002;

        ## probe every 3000 ms; 2 consecutive successes mark a peer up,
        ## 5 consecutive failures mark it down; 1000 ms probe timeout
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }

    server {
        listen 80;

        location /1 {
            proxy_pass http://cluster1;
        }

        location /status {
            check_status;
            access_log off;
        }
    }
}

Viewing cluster status

http://127.0.0.1/status
(The original post showed a screenshot of a failed health check here.)
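The status page renders as HTML by default. Recent versions of the upstream check module can also emit CSV or JSON via a `format` query argument, but whether your Tengine build accepts it is version-dependent, so treat this as an assumption and consult your version's documentation:

curl 'http://127.0.0.1/status?format=json'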
