nginx+tomcat


Lab environment

OS           Hostname   IP                                        Software
CentOS 6.5   nginx1     192.168.200.101 (VIP: 192.168.200.253)    Nginx + Keepalived
CentOS 6.5   nginx2     192.168.200.102 (VIP: 192.168.200.254)    Nginx + Keepalived
CentOS 6.5   tomcat1    192.168.200.103                           Tomcat + memcached
CentOS 6.5   tomcat2    192.168.200.104                           Tomcat + memcached

Configuration procedure

Common configuration (all four servers)

[root@nginx1 ~]# cat /etc/hosts

192.168.200.101 nginx1

192.168.200.102 nginx2

192.168.200.103 tomcat1

192.168.200.104 tomcat2

Deploying the nginx1 server

Set the hostname

[root@localhost ~]# hostname nginx1

[root@localhost ~]# bash

Install Nginx (pcre-devel via yum, then the Nginx RPM)

[root@nginx1 ~]# yum -y install pcre-devel

[root@nginx1 ~]# rpm -ivh nginx-1.8.0-1.el6.x86_64.rpm

Preparing...             ###########################################[100%]

   1:nginx             ########################################### [100%]

Locate the main configuration file

[root@nginx1 ~]# find / -type f -name "nginx.conf"

/etc/nginx/nginx.conf

/application/nginx-1.8.0/conf/nginx.conf

Edit the main configuration file

[root@nginx1 ~]# vim /application/nginx/conf/nginx.conf

keepalive_timeout 65;

 

        upstream tomcat_server {

        server 192.168.200.103:8080 weight=1;

        server 192.168.200.104:8080 weight=1;

         }

 

    gzip  on;

 

    server {

       listen       80;

       server_name  localhost;

 

        location / {

           root   html;

           index  index.html index.htm;

            proxy_pass http://tomcat_server;

        }

}

Check the syntax

[root@nginx1 ~]# /application/nginx/sbin/nginx -t

nginx: the configuration file /application/nginx-1.8.0/conf/nginx.conf syntax is ok

nginx: configuration file /application/nginx-1.8.0/conf/nginx.conf test is successful

Start Nginx

[root@nginx1 ~]# /application/nginx/sbin/nginx -c /application/nginx/conf/nginx.conf

[root@nginx1 ~]# netstat -anpt | grep nginx

tcp       0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      2730/nginx    

The nginx2 configuration is identical to nginx1.
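One way to replicate it is to copy the file to nginx2 and re-check it there (a minimal sketch, assuming Nginx is already installed under the same /application/nginx path on nginx2 and that root SSH logins between the two nodes are permitted):

[root@nginx1 ~]# scp /application/nginx/conf/nginx.conf nginx2:/application/nginx/conf/nginx.conf
[root@nginx1 ~]# ssh nginx2 /application/nginx/sbin/nginx -t
[root@nginx1 ~]# ssh nginx2 /application/nginx/sbin/nginx

The hostname nginx2 resolves through the /etc/hosts entries added earlier.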

Dual-VIP load balancing

How it works:

The two Nginx servers run two VRRP instances through Keepalived, and the two VIPs back each other up: each Nginx node is the master for one VIP and the backup for the other. If either Nginx machine suffers a hardware failure, Keepalived automatically moves its VIP to the other machine, so client access is not affected.

Compile and install the keepalived service on nginx1 and nginx2:

[root@nginx1 ~]# yum -y install kernel-devel openssl-devel

[root@nginx1 ~]# tar xf keepalived-1.2.13.tar.gz

[root@nginx1 ~]# cd keepalived-1.2.13

[root@nginx1 keepalived-1.2.13]# ./configure --prefix=/ --with-kernel-dir=/usr/src/kernels/2.6.32-431.el6.x86_64/ && make && make install

[root@nginx1 keepalived-1.2.13]# chkconfig --add keepalived

[root@nginx1 keepalived-1.2.13]# chkconfig keepalived on

[root@nginx1 keepalived-1.2.13]# chkconfig --list keepalived

keepalived         0:off      1:off      2:on       3:on       4:on       5:on       6:off

Edit the keepalived configuration files

nginx1

[root@nginx1 keepalived-1.2.13]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

 

global_defs {

  notification_email {

       wolf@163.com

   }

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id LVS_DEVEL

}

 

vrrp_instance VI_1 {

    state BACKUP

    interface eth0

    virtual_router_id 51

    priority 50

    advert_int 1

   authentication {

       auth_type PASS

       auth_pass 1111

    }

    virtual_ipaddress {

        192.168.200.254

    }  

}

 

vrrp_instance VI_2 {

    state MASTER

    interface eth0

    virtual_router_id 52

    priority 100

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.200.253

    }

}

 

nginx2

! Configuration File for keepalived

 

global_defs {

  notification_email {

       wolf@163.com

   }

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id LVS_DEVEL

}

 

vrrp_instance VI_1 {

    state MASTER

    interface eth0

    virtual_router_id 51

    priority 100

    advert_int 1

   authentication {

       auth_type PASS

       auth_pass 1111

    }

   virtual_ipaddress {

        192.168.200.254

    }

}

 

vrrp_instance VI_2 {

    state BACKUP

    interface eth0

    virtual_router_id 52

    priority 50

    advert_int 1

   authentication {

       auth_type PASS

       auth_pass 1111

    }

   virtual_ipaddress {

        192.168.200.253

    }

}

Start keepalived

[root@nginx1 keepalived-1.2.13]# service keepalived start

Starting keepalived:                                       [  OK  ]

 

[root@nginx2 keepalived-1.2.13]# service keepalived start

Starting keepalived:                                       [  OK  ]
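Optionally, watch the VRRP advertisements from either node to confirm that both instances are being announced (a quick sketch, assuming tcpdump is installed; the interface name eth0 matches the keepalived configuration above):

[root@nginx1 keepalived-1.2.13]# tcpdump -i eth0 -nn vrrp

Advertisements for virtual router ID 52 should come from nginx1 and those for ID 51 from nginx2; press Ctrl+C to stop.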

Check that the VIPs are in place

[root@nginx1 keepalived-1.2.13]# ip addr show dev eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:7a:34:c4 brd ff:ff:ff:ff:ff:ff

    inet 192.168.200.101/24 brd 192.168.200.255 scope global eth0

    inet 192.168.200.253/32 scope global eth0

    inet6 fe80::20c:29ff:fe7a:34c4/64 scope link

       valid_lft forever preferred_lft forever

 

[root@nginx2 keepalived-1.2.13]# ip addr show dev eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:42:62:29 brd ff:ff:ff:ff:ff:ff

    inet 192.168.200.102/24 brd 192.168.200.255 scope global eth0

    inet 192.168.200.254/32 scope global eth0

    inet6 fe80::20c:29ff:fe42:6229/64 scope link

       valid_lft forever preferred_lft forever

Install and configure the JDK and the Tomcat servers (shown on tomcat1; repeat on tomcat2):

[root@tomcat1 ~]# rm -rf $(which java)

[root@tomcat1 ~]# tar xf jdk-7u65-linux-x64.tar.gz

[root@tomcat1 ~]# mv jdk1.7.0_65/ /usr/local/java

[root@tomcat1 ~]# vim /etc/profile

Append:

export JAVA_HOME=/usr/local/java

export PATH=$PATH:$JAVA_HOME/bin

[root@tomcat1 ~]# source /etc/profile

[root@tomcat1 ~]# java -version

java version "1.7.0_65"

Java(TM) SE Runtime Environment (build 1.7.0_65-b17)

Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

 

[root@tomcat1 ~]# tar xf apache-tomcat-7.0.54.tar.gz

[root@tomcat1 ~]# mv apache-tomcat-7.0.54 /usr/local/tomcat7

[root@tomcat1 ~]# /usr/local/tomcat7/bin/startup.sh

Using CATALINA_BASE:  /usr/local/tomcat7

Using CATALINA_HOME:   /usr/local/tomcat7

Using CATALINA_TMPDIR: /usr/local/tomcat7/temp

Using JRE_HOME:       /usr/local/java

Using CLASSPATH:      /usr/local/tomcat7/bin/bootstrap.jar:/usr/local/tomcat7/bin/tomcat-juli.jar

Tomcat started.

[root@tomcat1 ~]# netstat -anpt |grep :8080

tcp       0      0 0.0.0.0:8080                0.0.0.0:*                   LISTEN      2734/java          

Open a browser and confirm that Tomcat installed successfully.
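If no graphical browser is handy, the same check can be done from the command line (a quick sketch; any machine that can reach tomcat1 will do):

[root@nginx1 ~]# curl -I http://192.168.200.103:8080/

A response starting with HTTP/1.1 200 OK means the default Tomcat page is being served.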

Create the Java web site:

First create a /webapp directory under the filesystem root to hold the site files.

[root@tomcat-1 ~]# mkdir /webapp

 

Create a test page, index.jsp, in the /webapp directory

[root@tomcat-1 ~]# vim /webapp/index.jsp

ServerInfo:  

SessionID:<%=session.getId()%>

<br>

SessionIP:<%=request.getServerName()%> 

<br>

SessionPort:<%=request.getServerPort()%>

<br>

<%

  out.println("server one");

%>

Edit Tomcat's server.xml file

Define a virtual host and point the site's document path at the /webapp directory created above by adding a Context element inside the Host element.

[root@tomcat1 ~]# cp /usr/local/tomcat7/conf/server.xml{,.bak}

[root@tomcat1 ~]# vim /usr/local/tomcat7/conf/server.xml

124      <Host name="localhost" appBase="webapps"

125            unpackWARs="true" autoDeploy="true">

126            <Context docBase="/webapp" path="" reloadable="false" >

127            </Context>

docBase="/webapp"          #document base (root) directory of the web application

path=""                    #context path; an empty string makes this the default web application

reloadable="false"         #whether Tomcat monitors the application's classes for changes and reloads them automatically

Restart Tomcat and test.
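The restart uses the same scripts shown later for Tomcat 2, and the page can also be fetched from the command line (a short sketch; the curl step assumes the Context added above is in effect):

[root@tomcat1 ~]# /usr/local/tomcat7/bin/shutdown.sh
[root@tomcat1 ~]# /usr/local/tomcat7/bin/startup.sh
[root@tomcat1 ~]# curl http://192.168.200.103:8080/index.jsp

The response should contain a SessionID line and the text "server one".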

Tomcat 2 is configured essentially the same way as Tomcat 1:

Install the JDK and set up the Java environment, using the same version as on Tomcat 1.

Install Tomcat, using the same version as on Tomcat 1.

[root@tomcat-2 ~]# vim /webapp/index.jsp

ServerInfo:  

SessionID:<%=session.getId()%>

<br>

SessionIP:<%=request.getServerName()%> 

<br>

SessionPort:<%=request.getServerPort()%>

<br>

<%

  out.println("server two");

%>

 

[root@tomcat-2 ~]# cp /usr/local/tomcat7/conf/server.xml{,.bak}

[root@tomcat-2 ~]# vim /usr/local/tomcat7/conf/server.xml

 

124      <Host name="localhost" appBase="webapps"

125            unpackWARs="true" autoDeploy="true">

126            <Context docBase="/webapp" path="" reloadable="false" >

127            </Context>

 

Restart Tomcat and test in a browser

[root@tomcat-2 ~]# /usr/local/tomcat7/bin/shutdown.sh

[root@tomcat-2 ~]# /usr/local/tomcat7/bin/startup.sh

Notes on the Tomcat configuration

/usr/local/tomcat7                 #main directory

bin                                #scripts for starting and stopping Tomcat on Windows and Linux

conf                               #Tomcat's global configuration files, the most important being server.xml and web.xml

lib                                #library files (JARs) required by Tomcat

logs                               #Tomcat log files

webapps                            #Tomcat's main web deployment directory (including the sample applications)

work                               #class files produced when JSPs are compiled

 

[root@tomcat-1 ~]# ls /usr/local/tomcat7/conf/

catalina.policy                    #permission (security policy) configuration

catalina.properties                #Tomcat property configuration

context.xml                        #context configuration

logging.properties                 #logging configuration

server.xml                         #main configuration file

tomcat-users.xml                   #manager-gui user configuration (grants access to the management interface that ships with Tomcat)

web.xml                            #servlet, servlet-mapping, filter, MIME and related configuration

 

server.xml               #the main configuration file; here you can change the listening port, set the site root, define virtual hosts, enable HTTPS, and so on.

 

The structure of server.xml

<Server>

    <Service>

        <Connector />

        <Engine>

            <Host>

                <Context> </Context>

            </Host>

        </Engine>

    </Service>

</Server>

 

Anything inside <!-- --> is a comment.

 

Server

The Server element represents the entire Catalina servlet container.

 

Service

A Service is a grouping: it consists of one or more Connectors plus a single Engine, which handles all of the client requests received by those Connectors.

 

Connector

A Connector listens for client requests on a specific port, hands the requests it receives to the Engine, and returns the Engine's responses to the clients.

 

A Tomcat Engine typically has two Connectors: one listens directly for HTTP requests from browsers, and the other listens for requests forwarded by another web server.

The Coyote HTTP/1.1 Connector listens on port 8080 for HTTP requests from browsers, while the Coyote AJP Connector listens on port 8009 for servlet/JSP proxy requests from another web server such as Apache.
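You can confirm which ports these Connectors use in this installation with a quick look at server.xml (a sketch; in a stock Tomcat 7 configuration this shows the HTTP/1.1 Connector on 8080 and the AJP/1.3 Connector on 8009):

[root@tomcat1 ~]# grep -n '<Connector' /usr/local/tomcat7/conf/server.xml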

 

Engine

Multiple virtual hosts (Host elements) can be configured under an Engine, each with its own domain name.

When the Engine receives a request, it matches the request to one of its Hosts and hands it to that Host for processing.

The Engine has a default virtual host; any request that cannot be matched to a Host is handed to that default Host.

 

Host

A Host represents one virtual host; each virtual host corresponds to a network domain name.

One or more web apps can be deployed under each virtual host; each web app corresponds to a Context and has a context path.

 

When the Host receives a request, it matches the request to a Context and hands it to that Context for processing. Matching uses the longest match, so a Context with path="" becomes the Host's default Context.

 

Context

A Context corresponds to one web application, which consists of one or more servlets.

Run a script on both nginx1 and nginx2 to monitor the Nginx process.

The script:

[root@nginx2 keepalived-1.2.13]# vim nginx_pidcheck

 

#!/bin/bash

# Watchdog: if nginx dies, try to restart it; if it still cannot be started,

# stop keepalived so the VIP fails over to the other node.

while :

do

        nginxpid=`ps -C nginx --no-header | wc -l`

        if [ $nginxpid -eq 0 ]

        then

               /application/nginx/sbin/nginx

               keeppid=$(ps -C keepalived --no-header | wc -l)

               if [ $keeppid -eq 0 ]

               then

                       /etc/init.d/keepalived start

               fi

               sleep 5

               nginxpid=`ps -C nginx --no-header | wc -l`

               if [ $nginxpid -eq 0 ]

               then

                       /etc/init.d/keepalived stop

               fi

        fi

        sleep 5

done

Run the monitoring script in the background

[root@nginx1 keepalived-1.2.13]# sh nginx_pidcheck &

[1] 4325
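To have the watchdog start again after a reboot, one option is to append it to /etc/rc.local on both nginx nodes (a sketch; the script is assumed to have been saved as /root/nginx_pidcheck, so adjust the path to wherever you created it):

[root@nginx1 keepalived-1.2.13]# echo 'sh /root/nginx_pidcheck &' >> /etc/rc.local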

Check the port

[root@nginx1 keepalived-1.2.13]# netstat -anpt | grep nginx

tcp       0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      2730/nginx

[root@nginx1 keepalived-1.2.13]# killall -s QUIT nginx

[root@nginx1 keepalived-1.2.13]# netstat -anpt | grep nginx

tcp       0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      2730/nginx

Even after nginx is killed, port 80 comes straight back up, because the watchdog script restarts the process.

VIP failover test

[root@nginx1 keepalived-1.2.13]# ip addr show dev eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:7a:34:c4 brd ff:ff:ff:ff:ff:ff

    inet 192.168.200.101/24 brd 192.168.200.255 scope global eth0

    inet 192.168.200.253/32 scope global eth0

    inet6 fe80::20c:29ff:fe7a:34c4/64 scope link

       valid_lft forever preferred_lft forever

 

[root@nginx2 keepalived-1.2.13]# service keepalived stop

Stopping keepalived:                                       [  OK  ]

 

[root@nginx1 keepalived-1.2.13]# ip addr show dev eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:7a:34:c4 brd ff:ff:ff:ff:ff:ff

    inet 192.168.200.101/24 brd 192.168.200.255 scope global eth0

    inet 192.168.200.253/32 scope global eth0

    inet 192.168.200.254/32 scope global eth0

    inet6 fe80::20c:29ff:fe7a:34c4/64 scope link

       valid_lft forever preferred_lft forever
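To watch the failover from the client side, a simple request loop against one of the VIPs can be left running while keepalived is stopped and started on either node (a sketch to run from a client machine on the 192.168.200.0/24 network):

while :; do curl -s -o /dev/null -w "%{http_code}\n" http://192.168.200.254/; sleep 1; done

The responses should keep arriving (status 200 once the Tomcat back ends are serving) with at most a brief pause while the VIP moves between nginx2 and nginx1.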

Deploy and install memcached (shown on tomcat1; repeat on tomcat2)

Compile and install

[root@tomcat1 ~]# yum -y install gcc openssl-devel pcre-devel zlib-devel

[root@tomcat1 ~]# tar xf libevent-2.0.15-stable.tar.gz

[root@tomcat1 ~]# cd libevent-2.0.15-stable

[root@tomcat1 libevent-2.0.15-stable]# ./configure --prefix=/usr/local/libevent && make && make install

 

[root@tomcat1 libevent-2.0.15-stable]# cd

[root@tomcat1 ~]# tar xf memcached-1.4.5.tar.gz

[root@tomcat1 ~]# cd memcached-1.4.5

[root@tomcat1 memcached-1.4.5]# ./configure --prefix=/usr/local/memcached --with-libevent=/usr/local/libevent/ && make && make install

 

[root@tomcat1 memcached-1.4.5]# ldconfig -v | grep libevent

         libevent-1.4.so.2 -> libevent-1.4.so.2.1.3

         libevent_extra-1.4.so.2 -> libevent_extra-1.4.so.2.1.3

         libevent_core-1.4.so.2 -> libevent_core-1.4.so.2.1.3

 

[root@tomcat1 memcached-1.4.5]# /usr/local/memcached/bin/memcached -u root -m 512M -n 10 -f 2 -d -vvv -c 512

/usr/local/memcached/bin/memcached: error while loading shared libraries: libevent-2.0.so.5: cannot open shared object file: No such file or directory

 

The newly built library lives in /usr/local/libevent/lib/, which the dynamic linker does not search by default, so add that path to /etc/ld.so.conf and rerun ldconfig:

[root@tomcat1 memcached-1.4.5]# vim /etc/ld.so.conf

include ld.so.conf.d/*.conf

/usr/local/libevent/lib/

[root@tomcat1 memcached-1.4.5]# ldconfig

Start memcached

[root@tomcat1 memcached-1.4.5]# /usr/local/memcached/bin/memcached-u root -m 512M -n 10 -f 2 -d -vvv -c 512

Options:

       -h        #show help

       -p        #port for memcached to listen on (default 11211)

       -l        #IP address for the memcached server to listen on

       -u        #user to run memcached as (required when starting it as root)

       -m        #amount of physical memory, in MB, to use for data (default 64)

       -c        #maximum number of concurrent connections

       -vvv      #print very verbose output

       -n        #minimum space allocated per chunk, in bytes

       -f        #chunk size growth factor (default 1.25)

       -d        #run as a daemon in the background

Open another terminal and check the port

[root@tomcat1 ~]# netstat -anpt | grep :11211

tcp       0      0 0.0.0.0:11211               0.0.0.0:*                   LISTEN      10316/memcached    

Test whether memcached can store and retrieve data

[root@nginx1 keepalived-1.2.13]# telnet 192.168.200.103 11211

Trying 192.168.200.103...

Connected to 192.168.200.103.

Escape character is '^]'.

set username 0 0 8

zhangsan

STORED

get username

VALUE username 0 8

zhangsan

END

quit

Connection closed by foreign host.
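The same kind of check can be scripted without an interactive telnet session (a sketch, assuming the nc utility is available on the client):

[root@nginx1 keepalived-1.2.13]# printf 'stats\r\nquit\r\n' | nc 192.168.200.103 11211 | head -5

The first few STAT lines (pid, uptime, and so on) confirm the daemon is answering.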

Finally, connect Tomcat-1 and Tomcat-2 to memcached through memcached-session-manager (msm).

Copy the *.jar files from the session (msm) package into /usr/local/tomcat7/lib/:

[root@tomcat-1 ~]# cp session/* /usr/local/tomcat7/lib/

 

Edit the Tomcat configuration file so it connects to the specified memcached servers.

The tomcat-1 and tomcat-2 configuration files are essentially identical; write both following the examples below.

[root@tomcat-1 ~]# vim /usr/local/tomcat7/conf/context.xml

<Context>

<Manager   className="de.javakaffee.web.msm.MemcachedBackupSessionManager"

memcachedNodes="memA:192.168.200.103:11211memB:192.168.200.104:11211"

requestUrilgnorePattern=".*\(ico|png|gif|jpg|css|js)$"

transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"

/>

</Context>

 

[root@tomcat-2 ~]# vim /usr/local/tomcat7/conf/context.xml

<Context>

<Manager    className="de.javakaffee.web.msm.MemcachedBackupSessionManager"

memcachedNodes="memB:192.168.200.104:11211memA:192.168.200.103:11211"

requestUrilgnorePattern=".*\(ico|png|gif|jpg|css|js)$"

transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"

/>

</Context>

Restart Tomcat and check the result

[root@tomcat-1 ~]# /usr/local/tomcat7/bin/shutdown.sh

[root@tomcat-1 ~]# /usr/local/tomcat7/bin/startup.sh

 

If it worked, the Tomcat process will hold established connections to the memcached ports (compare the netstat output before and after the restart).

Tomcat-1 and Tomcat-2 should look like the following:

[root@tomcat1 ~]# netstat -anpt | grep java

tcp       0      0 0.0.0.0:8080                0.0.0.0:*                   LISTEN      10665/java

tcp       0      0 127.0.0.1:8005              0.0.0.0:*                   LISTEN      10665/java

tcp       0      0 0.0.0.0:8009                0.0.0.0:*                   LISTEN      10665/java

tcp       0      0 192.168.200.103:31345       192.168.200.103:11211       ESTABLISHED 10665/java

tcp       0      0 192.168.200.103:31341       192.168.200.103:11211       ESTABLISHED 10665/java

tcp       0      0 192.168.200.103:55980       192.168.200.104:11211       ESTABLISHED 10665/java

tcp       0      0 192.168.200.103:55978       192.168.200.104:11211       ESTABLISHED 10665/java

[root@tomcat1 ~]# netstat -anpt | grep :11211

tcp       0      0 0.0.0.0:11211               0.0.0.0:*                   LISTEN      10605/memcached

tcp       0      0 192.168.200.103:31345       192.168.200.103:11211       ESTABLISHED 10665/java

tcp       0      0 192.168.200.103:11211       192.168.200.103:31341       ESTABLISHED 10605/memcached

tcp       0      0 192.168.200.103:31341       192.168.200.103:11211       ESTABLISHED 10665/java

tcp       0      0 192.168.200.103:55980       192.168.200.104:11211       ESTABLISHED 10665/java

Open a browser and test.

Because the two upstream servers have equal weight, the page alternates between "server one" and "server two" on each refresh.
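To confirm that the session itself is shared rather than recreated on every request, repeat the request while presenting the cookie from the first response (a minimal sketch using curl's cookie jar; run it from any client that can reach the VIP, and the cookie file path is illustrative):

curl -s -c /tmp/jsid.txt http://192.168.200.253/
curl -s -b /tmp/jsid.txt http://192.168.200.253/
curl -s -b /tmp/jsid.txt http://192.168.200.253/

The SessionID line should remain the same across the responses even as the "server one"/"server two" line alternates, which is exactly what the memcached-backed session manager provides.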

Remote synchronized backup

Basic usage of the rsync command:

Format: rsync [options] SOURCE DESTINATION

Common options (a combined example follows the list):

-a, --archive              archive mode: recurse and preserve file attributes; equivalent to -rlptgoD

-r, --recursive            recurse into subdirectories

-l, --links                copy symbolic links as links

-p, --perms                preserve permissions

-t, --times                preserve modification times

-g, --group                preserve group ownership

-o, --owner                preserve owner

-D, --devices              preserve device files

-z, --compress             compress data during transfer

-H                         preserve hard links

-A                         preserve ACLs

-P                         show transfer progress

-u, --update               update only: skip files that already exist at the destination with a newer timestamp (do not overwrite newer files)

--port=PORT                use a non-default rsync daemon port (default 873)

--delete                   delete files that exist at the destination but not at the source

--password-file=FILE       read the password from FILE

--bwlimit=KBPS             limit I/O bandwidth, in KB/s

--filter "- FILENAME"      exclude files matching FILENAME

--exclude=PATTERN          exclude files matching PATTERN

-v                         show details of the synchronization
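For example, a typical invocation that combines the most common options, run here as a dry run (-n) so nothing is actually transferred, might look like this (paths are illustrative):

rsync -avzn --delete /webapp/ root@192.168.200.104:/webapp/

Dropping -n performs the real transfer; this is essentially what the trigger script at the end of this article does with -azP.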

Environment

tomcat1            synchronization source         192.168.200.103

tomcat2            synchronization target         192.168.200.104

Synchronize /webapp on tomcat1 to /webapp on tomcat2.

Install rsync

[root@tomcat1 ~]# rpm -qf `which rsync`

rsync-3.0.6-9.el6_4.1.x86_64

[root@tomcat1 ~]# rpm -ivh /media/cdrom/Packages/rsync-3.0.6-9.el6_4.1.x86_64.rpm

Preparing...               ########################################### [100%]

         package rsync-3.0.6-9.el6_4.1.x86_64 is already installed

Install the xinetd service to manage the rsync service

[root@tomcat1 ~]# rpm -ivh /media/cdrom/Packages/xinetd-2.3.14-39.el6_4.x86_64.rpm

Preparing...               ########################################### [100%]

   1:xinetd                ########################################### [100%]

Enable the rsync service

[root@tomcat1 ~]# vim /etc/xinetd.d/rsync

Change:  disable = yes

    to:  disable = no

 

[root@tomcat1 ~]# /etc/init.d/xinetd restart

Stopping xinetd:                                           [FAILED]

Starting xinetd:                                           [  OK  ]

Check whether inotify is supported; it has been part of the mainline kernel since 2.6.13.

[root@tomcat1 ~]# uname -r

2.6.32-431.el6.x86_64

 

[root@tomcat1 ~]# ll /proc/sys/fs/inotify/

total 0

-rw-r--r-- 1 root root 0 Apr 16 19:09 max_queued_events

-rw-r--r-- 1 root root 0 Apr 16 19:09 max_user_instances

-rw-r--r-- 1 root root 0 Apr 16 19:09 max_user_watches

Note: the Linux kernel's inotify mechanism exposes three tuning parameters by default:

max_queued_events               #size of the event queue

max_user_instances              #maximum number of inotify instances per user

max_user_watches                #maximum number of files watched per instance

 

Check the current values

[root@tomcat1 ~]# cat /proc/sys/fs/inotify/max_queued_events

16384

[root@tomcat1 ~]# cat /proc/sys/fs/inotify/max_user_instances

128

[root@tomcat1 ~]# cat /proc/sys/fs/inotify/max_user_watches

8192

Note: when the number of directories and files to monitor is large, or they change frequently, increase these three values.

Edit the /etc/sysctl.conf configuration file

[root@tomcat1 ~]# vim /etc/sysctl.conf

Append the following at the end of the file:

fs.inotify.max_queued_events = 32768

fs.inotify.max_user_instances = 1024

fs.inotify.max_user_watches = 90000000

 

[root@tomcat1 ~]# sysctl -p                                                #apply the changes to sysctl.conf

[root@tomcat1 ~]# cat /proc/sys/fs/inotify/max_user_watches                #verify that the change took effect

90000000

Set up SSH key authentication

[root@tomcat1 ~]# ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):          #press Enter

Enter passphrase (empty for no passphrase):                       #press Enter

Enter same passphrase again:                                      #press Enter

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

9f:bf:bd:15:e0:dd:07:63:6c:08:43:4c:a7:c9:d6:e2 root@tomcat1

The key's randomart image is:

+--[ RSA 2048]----+

|         +=.    |

|         ..Bo   |

|          *o.*  |

|         o..ooo.|

|        SE  . oo|

|         ..    o|

|         o     .|

|           . .. |

|           o.o. |

+-----------------+

[root@tomcat1 ~]# ssh-copy-id root@192.168.200.104

The authenticity of host '192.168.200.104 (192.168.200.104)' can't be established.

RSA key fingerprint is 98:22:6a:f2:64:d6:e8:98:b7:c9:7b:58:b7:03:a8:9b.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '192.168.200.104' (RSA) to the list of known hosts.

root@192.168.200.104's password:          #enter the root password of tomcat2

Now try logging into the machine, with "ssh 'root@192.168.200.104'", and check in:

 

 .ssh/authorized_keys

 

to make sure we haven't added extra keys that you weren't expecting.   #this message indicates success

 

Test: log in and back up without being prompted for a password

[root@tomcat1 ~]# mkdir /xiaoyi

[root@tomcat1 ~]# cd /xiaoyi/

[root@tomcat1 xiaoyi]# touch wolf.txt

[root@tomcat1 xiaoyi]# ls

wolf.txt

[root@tomcat1 xiaoyi]# cd

 

[root@tomcat1 ~]# rsync -azP  /xiaoyi root@192.168.200.104:/opt

sending incremental file list

xiaoyi/

xiaoyi/wolf.txt

           0 100%    0.00kB/s    0:00:00 (xfer#1, to-check=0/2)

 

sent 97 bytes received 35 bytes  88.00 bytes/sec

total size is 0 speedup is 0.00

 

Check on tomcat2

[root@tomcat2 ~]# ls /opt/xiaoyi/

wolf.txt

Install inotify-tools

Installing inotify-tools provides the inotifywait and inotifywatch helper programs, which are used to monitor and summarize file system changes.

[root@tomcat1 ~]# tar xf inotify-tools-3.14.tar.gz

[root@tomcat1 ~]# cd inotify-tools-3.14

[root@tomcat1 inotify-tools-3.14]# ./configure && make && make install

 

Test

Use the inotifywait command to monitor changes in the web directory /webapp. Then, in another terminal, add or move files under /webapp and watch the output.

[root@tomcat1 ~]# inotifywait -mrq -e create,move,delete,modify,attrib /webapp/

Common parameters:

-e    specify which events to monitor

      these include: create, move, delete, modify (file content changed), and attrib (attribute changes)

-m    monitor continuously

-r    recurse through the whole directory tree

-q    simplify the output

Write a trigger-based synchronization script

[root@tomcat1 ~]# vim a.sh

#!/bin/bash

# Whenever anything under /webapp/ changes, push the whole directory to tomcat2.

inotifywait -mrq -e create,move,delete,modify,attrib /webapp/ | while read a b c

do

        rsync-azP /webapp/ root@192.168.200.104:/webapp/

done

Test

Run the script

[root@tomcat1 ~]# bash a.sh

 

In another terminal

[root@tomcat1 ~]# cd /webapp/

[root@tomcat1 webapp]# mkdir a

 

Watch the output in the terminal where the script is running

[root@tomcat1 ~]# bash a.sh

sending incremental file list

./

index.jsp

         179 100%    0.00kB/s    0:00:00 (xfer#1, to-check=2/4)

a/

test/

 

sent 251 bytes  received 48 bytes  598.00 bytes/sec

total size is 179 speedup is 0.60

 

Check the changes on tomcat2

[root@tomcat2 webapp]# ls

a  index.jsp  test
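For day-to-day use the trigger script would normally be left running in the background rather than in a foreground terminal (a sketch; the script location and log path are illustrative):

[root@tomcat1 ~]# nohup bash /root/a.sh >/var/log/websync.log 2>&1 &

It could also be added to /etc/rc.local so that synchronization resumes automatically after a reboot.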