ActiveMQ Tutorial (Part 2) - Clustering


Infrastructure

  • zookeeper
    A coordination service framework that schedules multiple ActiveMQ instances. When one ActiveMQ service goes down, ZooKeeper automatically promotes one of the remaining healthy ActiveMQ services in the cluster to master so it can keep serving requests.
  • activemq
    The message queue framework; we will deploy several ActiveMQ services.

Plan

Suppose we have three servers:

  • 192.168.0.200
    runs activemq-master and zookeeper
  • 192.168.0.201
    runs activemq-slave01
  • 192.168.0.202
    runs activemq-slave02

Building the cluster

Configuring ZooKeeper

  • Extract zookeeper-3.4.9.tar.gz
$ tar -zxvf zookeeper-3.4.9.tar.gz
  • Create the ZooKeeper configuration file from the bundled sample
$ cp /home/zookeeper-3.4.9/conf/zoo_sample.cfg /home/zookeeper-3.4.9/conf/zoo.cfg
  • Edit zookeeper-3.4.9/conf/zoo.cfg as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Key settings:
dataDir: directory where ZooKeeper stores its data snapshots
dataLogDir: directory where ZooKeeper writes its log files
clientPort: port that ZooKeeper clients connect to
tickTime: the heartbeat interval between ZooKeeper servers, and between clients and servers; one heartbeat is sent every tickTime. Measured in milliseconds.
initLimit: the maximum number of heartbeats (tickTime intervals) a follower (F) may take to complete its initial connection to the leader (L)
syncLimit: the maximum number of heartbeats (tickTime intervals) allowed between a follower sending a request to the leader and receiving the acknowledgement
server.N=YYY:A:B: server id and address plus cluster ports (server id, server IP, LF communication port, election port). The format is a little unusual: N is the server id, YYY is the server's IP address, A is the LF communication port the server uses to exchange information with the cluster leader, and B is the election port used when servers talk to each other to elect a new leader (when the leader dies, the remaining servers communicate to choose a new one). Normally every server in the cluster uses the same A port and the same B port; in a pseudo-cluster, however, all servers share one IP address, so the A and B ports must differ per server.
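As an illustration of the server.N format: if the single ZooKeeper above were later expanded to a three-node ensemble on the planned hosts, the entries appended to each zoo.cfg might look like the sketch below (ports 2888/3888 are the conventional defaults; this is not part of the original single-node setup):

```properties
# hypothetical three-node ensemble; the N in server.N must match
# the id written to ${dataDir}/myid on each host
server.1=192.168.0.200:2888:3888
server.2=192.168.0.201:2888:3888
server.3=192.168.0.202:2888:3888
```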

Configuring ActiveMQ

Following the plan, we configure three ActiveMQ services and hand all of them over to ZooKeeper for coordination.

  • Installation
    Installing ActiveMQ is not covered again here; see "ActiveMQ Tutorial (Part 1) - Installation": http://blog.csdn.net/eugeneheen/article/details/55190552

  • Edit the core configuration file /home/apache-activemq-5.9.1-bin/activemq.xml

<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.conf}/credentials.properties</value>
        </property>
    </bean>

    <!-- Allows accessing the server log -->
    <bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
          lazy-init="false" scope="singleton"
          init-method="start" destroy-method="stop">
    </bean>

    <!--
        The <broker> element is used to configure the ActiveMQ broker.
    -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker-test" dataDirectory="${activemq.data}">

        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry topic=">" >
                    <!-- The constantPendingMessageLimitStrategy is used to prevent
                         slow topic consumers to block producers and affect other consumers
                         by limiting the number of messages that are retained
                         For more information, see:
                         http://activemq.apache.org/slow-consumer-handling.html
                    -->
                  <pendingMessageLimitStrategy>
                    <constantPendingMessageLimitStrategy limit="1000"/>
                  </pendingMessageLimitStrategy>
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>

        <!--
            The managementContext is used to configure how ActiveMQ is exposed in
            JMX. By default, ActiveMQ uses the MBean server that is started by
            the JVM. For more information, see:
            http://activemq.apache.org/jmx.html
        -->
        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <!--
            Configure message persistence for the broker. The default persistence
            mechanism is the KahaDB store (identified by the kahaDB tag).
            For more information, see:
            http://activemq.apache.org/persistence.html
        -->
        <persistenceAdapter>
            <!--<kahaDB directory="${activemq.data}/kahadb"/>-->
            <replicatedLevelDB directory="${activemq.data}/kahadb"
                replicas="3"
                bind="tcp://0.0.0.0:0"
                zkAddress="192.168.0.200:2181"
                zkSessionTimeout="10s"
                hostname="192.168.0.201"
                zkPath="/home/leveldb-stores"
                />
        </persistenceAdapter>

        <!--
            The systemUsage controls the maximum amount of space the broker will
            use before disabling caching and/or slowing down producers. For more information, see:
            http://activemq.apache.org/producer-flow-control.html
        -->
        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage percentOfJvmHeap="70" />
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="100 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="50 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <!--
            The transport connectors expose ActiveMQ over a given protocol to
            clients and other brokers. For more information, see:
            http://activemq.apache.org/configuring-transports.html
        -->
        <transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:51511?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
            <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        </transportConnectors>

        <!-- destroy the spring context on shutdown to stop jetty -->
        <shutdownHooks>
            <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
        </shutdownHooks>

    </broker>

    <!--
        Enable web consoles, REST and Ajax APIs and demos
        The web consoles requires by default login, you can disable this in the jetty.xml file
        Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
    -->
    <import resource="jetty.xml"/>

</beans>

Configuration details:
a. Use a uniform brokerName
Every ActiveMQ service in the cluster must be configured with the same brokerName value.

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker-test" dataDirectory="${activemq.data}">

b. Configure the persistenceAdapter
The persistenceAdapter sets the persistence mechanism. There are three main options: kahaDB (the default), database persistence, and levelDB (supported since ActiveMQ 5.9.0).

<persistenceAdapter>
    <!--<kahaDB directory="${activemq.data}/kahadb"/>-->
    <replicatedLevelDB directory="${activemq.data}/kahadb"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="192.168.0.200:2181"
        zkSessionTimeout="10s"
        hostname="192.168.0.201"
        zkPath="/home/leveldb-stores"
        />
</persistenceAdapter>

directory: path where the store's data is kept
replicas: number of nodes in the cluster; (replicas/2)+1 gives the minimum number of services that must stay up. With a 3-node cluster, one node may fail while the other two must keep running.
bind: when this node becomes master, it binds the configured address and port to run the master-slave replication protocol (set the address to each activemq host's own IP; just make sure the ports differ)
zkAddress: the ZooKeeper IP and port; for a ZooKeeper cluster, separate the entries with ","
zkSessionTimeout: timeout of the ZooKeeper session
zkPassword: password for the ZooKeeper service
hostname: IP of the machine this ActiveMQ service is deployed on
zkPath: ZooKeeper path where the election information is stored
sync: the policy for where a message must be persisted before it is considered complete; if several policies are listed, separated by commas, ActiveMQ picks the strongest one (with local_mem,local_disk it will always persist to local disk)
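The replicas arithmetic above can be sanity-checked with plain shell arithmetic (a throwaway sketch, not part of the broker setup):

```shell
# quorum = (replicas / 2) + 1 nodes must stay up for the
# replicated LevelDB store to keep accepting writes
replicas=3
quorum=$(( replicas / 2 + 1 ))
tolerated=$(( replicas - quorum ))
echo "replicas=${replicas} quorum=${quorum} tolerated_failures=${tolerated}"
```

With replicas="3" this prints quorum=2 and tolerated_failures=1: losing one broker is fine, losing two stops the store.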

c. Configure the transportConnector message port

<transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:51511?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>

Just make sure each activemq service uses a distinct message port; for example, configure the three services with ports 51511, 51512 and 51513 respectively.
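Putting points b and c together, only a couple of values change from broker to broker. The sketch below shows what they might look like on the 192.168.0.202 host from the plan (illustrative; the other brokers would change only hostname and the openwire port):

```xml
<!-- sketch for slave02 (192.168.0.202); the other brokers change only
     hostname and the openwire port (51511 / 51512 / 51513) -->
<persistenceAdapter>
    <replicatedLevelDB directory="${activemq.data}/kahadb"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="192.168.0.200:2181"
        zkSessionTimeout="10s"
        hostname="192.168.0.202"
        zkPath="/home/leveldb-stores"/>
</persistenceAdapter>

<transportConnectors>
    <transportConnector name="openwire"
        uri="tcp://0.0.0.0:51513?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
```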

  • Configure the ActiveMQ web console port in /home/apache-activemq-5.9.1-bin/jetty.xml
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
    <property name="host" value="0.0.0.0"/>
    <property name="port" value="8161"/>
</bean>

Change the value of the port property to whatever port you want to use.

Starting the services

  • Start zookeeper
    The ZooKeeper service must be started before the ActiveMQ services.
$ {zookeeper_home}/bin/zkServer.sh start
  • Start activemq
    Start the ActiveMQ services one by one.
$ {activemq_home}/bin/activemq start

Managing ActiveMQ

Under ZooKeeper's policy, one of the three ActiveMQ servers is elected to serve while the other two stand by, doing nothing but master-slave data replication.

So after the ZooKeeper server and the ActiveMQ servers are started, only one of http://192.168.0.200:8161/admin/, http://192.168.0.201:8161/admin/ and http://192.168.0.202:8161/admin/ will respond.

Connecting clients to ActiveMQ

Managing the connection factory as a Spring bean, the configuration looks like this:

<bean id="activeMQConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="userName" value="admin" />
    <property name="password" value="admin" />
    <property name="brokerURL" value="failover:(tcp://192.168.0.200:51511,tcp://192.168.0.201:51512,tcp://192.168.0.202:51513)?initialReconnectDelay=100" />
</bean>
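To actually send messages through this factory, a JmsTemplate from spring-jms can be wired against it (a minimal sketch; the bean id jmsTemplate and the queue name test.queue are hypothetical):

```xml
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
    <!-- reuses the failover-aware connection factory defined above -->
    <property name="connectionFactory" ref="activeMQConnectionFactory"/>
    <!-- hypothetical destination; change to your own queue -->
    <property name="defaultDestinationName" value="test.queue"/>
</bean>
```

A call such as jmsTemplate.convertAndSend("hello") then goes to whichever broker is currently master, and the failover: transport reconnects automatically when the master changes.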