Deploying a ZooKeeper Cluster (Latest Version)
Source: Internet · Editor: 程序博客网 · Date: 2024/06/03 21:52
1. Download
Download page: https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.10/. The latest stable release at the time of writing is 3.4.10, which we can fetch directly with wget (3.4.6 is an alternative if you prefer an older, more battle-tested release).
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
2. Directory layout
Create the root directory with mkdir /zkcluster, then create three subdirectories under it, one per client port:
[root@xxx__121_71 zkcluster]# pwd
/zkcluster
[root@xxx__121_71 zkcluster]# ll
total 12
drwxr-xr-x. 2 root root 4096 Apr 13 21:43 zk3001
drwxr-xr-x. 2 root root 4096 Apr 13 21:43 zk3002
drwxr-xr-x. 2 root root 4096 Apr 13 21:43 zk3003
[root@xxx__121_71 zkcluster]#
The ZooKeeper tarball sits under /zkcluster and is extracted into each node directory.
# Extract
[root@xxx__121_71 zkcluster]# tar -xvf zookeeper-3.4.10.tar.gz -C zk3001
[root@xxx__121_71 zkcluster]# tar -xvf zookeeper-3.4.10.tar.gz -C zk3002
[root@xxx__121_71 zkcluster]# tar -xvf zookeeper-3.4.10.tar.gz -C zk3003
# After extraction, zk3001, zk3002 and zk3003 have identical layouts. One extra mv is needed, because each directory still contains a nested zookeeper-3.4.10 directory:
[root@xxx__121_71 zkcluster]# mv zk3001/zookeeper-3.4.10/* zk3001
[root@xxx__121_71 zkcluster]# ll zk3001
total 1592
drwxr-xr-x. 2 1001 1001 4096 Mar 23 19:27 bin
-rw-rw-r--. 1 1001 1001 84725 Mar 23 18:14 build.xml
drwxr-xr-x. 2 1001 1001 4096 Mar 23 19:27 conf
drwxr-xr-x. 10 1001 1001 4096 Mar 23 18:14 contrib
drwxr-xr-x. 2 1001 1001 4096 Mar 23 19:36 dist-maven
drwxr-xr-x. 6 1001 1001 4096 Mar 23 19:27 docs
-rw-rw-r--. 1 1001 1001 1709 Mar 23 18:14 ivysettings.xml
-rw-rw-r--. 1 1001 1001 5691 Mar 23 18:14 ivy.xml
drwxr-xr-x. 4 1001 1001 4096 Mar 23 19:27 lib
-rw-rw-r--. 1 1001 1001 11938 Mar 23 18:14 LICENSE.txt
-rw-rw-r--. 1 1001 1001 3132 Mar 23 18:14 NOTICE.txt
-rw-rw-r--. 1 1001 1001 1770 Mar 23 18:14 README_packaging.txt
-rw-rw-r--. 1 1001 1001 1585 Mar 23 18:14 README.txt
drwxr-xr-x. 5 1001 1001 4096 Mar 23 18:14 recipes
drwxr-xr-x. 8 1001 1001 4096 Mar 23 19:27 src
drwxr-xr-x. 2 1001 1001 4096 Apr 14 22:54 zookeeper-3.4.10
-rw-rw-r--. 1 1001 1001 1456729 Mar 23 19:24 zookeeper-3.4.10.jar
-rw-rw-r--. 1 1001 1001 819 Mar 23 19:28 zookeeper-3.4.10.jar.asc
-rw-rw-r--. 1 1001 1001 33 Mar 23 19:24 zookeeper-3.4.10.jar.md5
-rw-rw-r--. 1 1001 1001 41 Mar 23 19:24 zookeeper-3.4.10.jar.sha1
[root@xxx__121_71 zkcluster]# ll zk3001/zookeeper-3.4.10
total 0
[root@xxx__121_71 zkcluster]# rm -rf zk3001/zookeeper-3.4.10
[root@xxx__121_71 zkcluster]#
Repeat the same operations for zk3002 and zk3003.
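The three extract-and-flatten rounds above can be collapsed into one loop. The sketch below is self-contained so it runs anywhere: it builds a stand-in tarball with the same nested layout in a temporary directory (an assumption based on how Apache releases unpack). On the real host, point BASE at /zkcluster and use the downloaded tarball instead of the stand-in.

```shell
#!/bin/sh
set -e
# Stand-in sandbox: a fake release tarball that unpacks to zookeeper-3.4.10/,
# mirroring the real one. Replace with the genuine tarball on the real host.
BASE=$(mktemp -d)                                  # stand-in for /zkcluster
work=$(mktemp -d)
mkdir -p "$work/zookeeper-3.4.10/bin" "$work/zookeeper-3.4.10/conf"
touch "$work/zookeeper-3.4.10/conf/zoo_sample.cfg"
tar -czf "$work/zookeeper-3.4.10.tar.gz" -C "$work" zookeeper-3.4.10

# The actual steps from the article, one loop for all three node directories.
for node in zk3001 zk3002 zk3003; do
  mkdir -p "$BASE/$node"
  tar -xzf "$work/zookeeper-3.4.10.tar.gz" -C "$BASE/$node"
  mv "$BASE/$node/zookeeper-3.4.10/"* "$BASE/$node/"   # flatten one level
  rmdir "$BASE/$node/zookeeper-3.4.10"                 # now empty
done
ls "$BASE/zk3001"
```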
3. Configure zoo.cfg
We now have three extracted copies of ZooKeeper, which means three independent ZooKeeper instances to configure.
3.1 zk3001 configuration
# (1) Prepare the zoo.cfg file
cd /zkcluster/zk3001/conf/
cp zoo_sample.cfg zoo.cfg
# (2) Create the data and log directories
mkdir -p /zkcluster/zk3001/data
mkdir -p /zkcluster/zk3001/logs
# (3) zoo.cfg contents:
[root@xxx__121_71 conf]# vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zkcluster/zk3001/data
dataLogDir=/zkcluster/zk3001/logs
# the port at which the clients will connect
clientPort=3001
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.121.71:2888:3888
server.2=192.168.121.71:2889:3889
server.3=192.168.121.71:2890:3890
PS: the changes from the sample config are the dataDir/dataLogDir paths, the clientPort, and the server list at the bottom.
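The two limit settings are expressed in ticks, so their effective timeouts depend on tickTime. With the values above, the arithmetic works out as follows (plain shell arithmetic, not a ZooKeeper command):

```shell
# Effective timeouts implied by the zoo.cfg values above.
tickTime=2000   # ms per tick
initLimit=10    # ticks a follower may take to connect and sync with the leader
syncLimit=5     # ticks allowed between a request and its acknowledgement
echo "initial sync timeout: $((tickTime * initLimit)) ms"   # 20000 ms
echo "request/ack timeout:  $((tickTime * syncLimit)) ms"   # 10000 ms
```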
3.2 zk3002 configuration
# (1) Prepare the zoo.cfg file
cd /zkcluster/zk3002/conf/
cp zoo_sample.cfg zoo.cfg
# (2) Create the data and log directories
mkdir -p /zkcluster/zk3002/data
mkdir -p /zkcluster/zk3002/logs
# (3) zoo.cfg contents:
[root@xxx__121_71 conf]# vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zkcluster/zk3002/data
dataLogDir=/zkcluster/zk3002/logs
# the port at which the clients will connect
clientPort=3002
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.121.71:2888:3888
server.2=192.168.121.71:2889:3889
server.3=192.168.121.71:2890:3890
PS: same changes as zk3001, with the paths and clientPort adjusted for zk3002.
3.3 zk3003 configuration
# (1) Prepare the zoo.cfg file
cd /zkcluster/zk3003/conf/
cp zoo_sample.cfg zoo.cfg
# (2) Create the data and log directories
mkdir -p /zkcluster/zk3003/data
mkdir -p /zkcluster/zk3003/logs
# (3) zoo.cfg contents:
[root@xxx__121_71 conf]# vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zkcluster/zk3003/data
dataLogDir=/zkcluster/zk3003/logs
# the port at which the clients will connect
clientPort=3003
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.121.71:2888:3888
server.2=192.168.121.71:2889:3889
server.3=192.168.121.71:2890:3890
PS: same changes as the other two instances, with the paths and clientPort adjusted for zk3003.
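Since the three zoo.cfg files differ only in the client port and the paths, they can also be generated in one loop instead of edited by hand three times. A sketch, using a temporary BASE so it runs anywhere; on the real host you would set BASE=/zkcluster:

```shell
#!/bin/sh
set -e
BASE=$(mktemp -d)   # stand-in for /zkcluster
for port in 3001 3002 3003; do
  mkdir -p "$BASE/zk$port/conf" "$BASE/zk$port/data" "$BASE/zk$port/logs"
  # Write the per-instance config; only dataDir/dataLogDir/clientPort vary.
  cat > "$BASE/zk$port/conf/zoo.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$BASE/zk$port/data
dataLogDir=$BASE/zk$port/logs
clientPort=$port
server.1=192.168.121.71:2888:3888
server.2=192.168.121.71:2889:3889
server.3=192.168.121.71:2890:3890
EOF
done
grep clientPort "$BASE/zk3002/conf/zoo.cfg"   # clientPort=3002
```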
4. Create the myid files
The server list in /zkcluster/zk3001/conf/zoo.cfg defines server.1, server.2 and server.3; each instance's myid file (in its dataDir) must contain the matching server ID.
(1) myid for zk3001
[root@xxx__121_71 zk3001]# cd /zkcluster/zk3001/data
[root@xxx__121_71 data]# vim myid
[root@xxx__121_71 data]# more myid
1
[root@xxx__121_71 data]#
(2) myid for zk3002
[root@xxx__121_71 zk3001]# cd /zkcluster/zk3002/data
[root@xxx__121_71 data]# vim myid
[root@xxx__121_71 data]# more myid
2
[root@xxx__121_71 data]#
(3) myid for zk3003
[root@xxx__121_71 zk3002]# cd /zkcluster/zk3003/data
[root@xxx__121_71 data]# vim myid
[root@xxx__121_71 data]# more myid
3
[root@xxx__121_71 data]#
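The three myid files can also be written non-interactively with echo instead of opening vim each time. A minimal sketch (BASE is a temp dir here so it runs standalone; use /zkcluster on the real host):

```shell
#!/bin/sh
set -e
BASE=$(mktemp -d)   # stand-in for /zkcluster
for id in 1 2 3; do
  mkdir -p "$BASE/zk300$id/data"
  echo "$id" > "$BASE/zk300$id/data/myid"   # must match server.$id in zoo.cfg
done
cat "$BASE/zk3002/data/myid"   # prints: 2
```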
5. Start the ZooKeeper services
# start-foreground keeps log output on the console; the trailing & sends each process to the background (plain `zkServer.sh start` would daemonize instead)
(1)/zkcluster/zk3001/bin/zkServer.sh start-foreground &
(2)/zkcluster/zk3002/bin/zkServer.sh start-foreground &
(3)/zkcluster/zk3003/bin/zkServer.sh start-foreground &
6. Check each instance's role
(1) Check the instance on port 3001
[root@xxx__121_71 bin]# /zkcluster/zk3001/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /zkcluster/zk3001/bin/../conf/zoo.cfg
2017-04-15 00:05:24,223 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:3001:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:40468
2017-04-15 00:05:24,224 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:3001:NIOServerCnxn@883] - Processing srvr command from /127.0.0.1:40468
2017-04-15 00:05:24,226 [myid:1] - INFO [Thread-2:NIOServerCnxn@1044] - Closed socket connection for client /127.0.0.1:40468 (no session established for client)
Mode: follower
(2) Check the instance on port 3002
[root@xxx__121_71 bin]# /zkcluster/zk3002/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /zkcluster/zk3002/bin/../conf/zoo.cfg
2017-04-15 00:05:28,704 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:3002:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:37557
2017-04-15 00:05:28,705 [myid:2] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:3002:NIOServerCnxn@883] - Processing srvr command from /127.0.0.1:37557
2017-04-15 00:05:28,706 [myid:2] - INFO [Thread-4:NIOServerCnxn@1044] - Closed socket connection for client /127.0.0.1:37557 (no session established for client)
Mode: leader
# "leader" marks the primary node of the ensemble; the other two instances follow it.
(3) Check the instance on port 3003
[root@xxx__121_71 bin]# /zkcluster/zk3003/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /zkcluster/zk3003/bin/../conf/zoo.cfg
2017-04-15 00:05:33,399 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:3003:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:31395
2017-04-15 00:05:33,400 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:3003:NIOServerCnxn@883] - Processing srvr command from /127.0.0.1:31395
2017-04-15 00:05:33,403 [myid:3] - INFO [Thread-2:NIOServerCnxn@1044] - Closed socket connection for client /127.0.0.1:31395 (no session established for client)
Mode: follower
[root@xxx__121_71 bin]#
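When scripting health checks, usually only the `Mode:` line of the status output matters. The awk extraction below is demonstrated on a captured sample so the sketch runs without a live ensemble; on the real host you would pipe `zkServer.sh status 2>&1` into the same awk.

```shell
#!/bin/sh
set -e
# Captured sample of zkServer.sh status output; the role line is what we want.
sample='ZooKeeper JMX enabled by default
Using config: /zkcluster/zk3002/bin/../conf/zoo.cfg
Mode: leader'
# Split on ": " and print the value of the line that starts with "Mode:".
mode=$(printf '%s\n' "$sample" | awk -F': ' '/^Mode:/ {print $2}')
echo "$mode"   # prints: leader
```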
Reference document download:
https://github.com/MyCATApache/Mycat-doc/blob/master/%E8%AE%BE%E8%AE%A1%E6%96%87%E6%A1%A3/2.0/Mycat%20ZK%E9%85%8D%E7%BD%AE%E6%96%87%E4%BB%B6%E8%AF%A6%E8%A7%A3.docx