Setting up a ZooKeeper + Kafka environment


Steps: first install the Java environment, then install ZooKeeper, and finally install Kafka.

I. Install the Java environment

1. Download the required software

[root@linux-node2 src]# cd /usr/local/src

(The required packages are jdk-8u121-linux-x64.rpm, zookeeper-3.4.10.tar.gz, and kafka_2.10-0.10.1.0.tgz.)

 

Note:

These packages can be downloaded from their official sites. Recent ZooKeeper releases require a correspondingly recent JDK; JDK 8 is used here.

2. Install the JDK and declare JAVA_HOME

[root@linux-node2 src]# rpm -ivh jdk-8u121-linux-x64.rpm

[root@linux-node2 src]# echo 'export JAVA_HOME="/usr/java/jdk1.8.0_121"' >> /etc/profile

[root@linux-node2 src]# source /etc/profile
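To confirm that the JDK installed correctly and that JAVA_HOME was picked up from /etc/profile, a quick optional check:

# the RPM puts java on the PATH; this should report version 1.8.0_121
java -version

# should print /usr/java/jdk1.8.0_121
echo $JAVA_HOME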

 

II. Configure and install ZooKeeper

[root@linux-node2 ~]# mkdir -p /zookeeper/{zk1,zk2,zk3}   # create the ZooKeeper installation directories

1. Configure and install the first ZooKeeper instance

[root@linux-node2 ~]# tar -xf /usr/local/src/zookeeper-3.4.10.tar.gz -C /zookeeper/zk1/

[root@linux-node2 ~]# cd /zookeeper/zk1/zookeeper-3.4.10/conf/

[root@linux-node2 conf]# cp zoo_sample.cfg zoo.cfg  # main ZooKeeper configuration file

[root@linux-node2 conf]# grep '^[a-zA-Z]' /zookeeper/zk1/zookeeper-3.4.10/conf/zoo.cfg

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/zookeeper/zk1/zookeeper-3.4.10/dataPath

clientPort=2181

server.1=127.0.0.1:8880:7770  

server.2=127.0.0.1:8881:7771

server.3=127.0.0.1:8882:7772

Notes (this environment is built on a single machine):

1. In the server.X entries, the first port (8880, 8881, 8882) is used by followers to connect to the leader (quorum communication), and the second port (7770, 7771, 7772) is used for leader election; both are user-defined.

2. The dataDir data path does not exist yet; it is created next.

3. The client port (clientPort) is also different for each ZooKeeper instance.

4. If you build this environment on separate servers, you only need to change the IPs in the server.X entries.

 

# Create the data directory and the myid file

[root@linux-node2 conf]# mkdir -p /zookeeper/zk1/zookeeper-3.4.10/dataPath

[root@linux-node2 conf]# echo 1 > /zookeeper/zk1/zookeeper-3.4.10/dataPath/myid

Note: ZooKeeper requires this file, and each instance must have a different myid value.

 

2. Configure and install the second ZooKeeper instance

[root@linux-node2 ~]# cp -a /zookeeper/zk1/* /zookeeper/zk2/

[root@linux-node2 ~]# echo 2 > /zookeeper/zk2/zookeeper-3.4.10/dataPath/myid
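Because zk1's zoo.cfg was copied verbatim, its dataDir and clientPort still point at the first instance; they have to be edited so that they match the grep output below. A minimal sed sketch, assuming the paths used in this article:

# update the copied config for the second instance
CFG=/zookeeper/zk2/zookeeper-3.4.10/conf/zoo.cfg
sed -i 's#^dataDir=.*#dataDir=/zookeeper/zk2/zookeeper-3.4.10/dataPath#' "$CFG"
sed -i 's#^clientPort=.*#clientPort=2182#' "$CFG"

The server.1/server.2/server.3 lines stay unchanged, since all three instances share one election configuration.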

[root@linux-node2 ~]# grep '^[a-zA-Z]' /zookeeper/zk2/zookeeper-3.4.10/conf/zoo.cfg

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/zookeeper/zk2/zookeeper-3.4.10/dataPath

clientPort=2182

server.1=127.0.0.1:8880:7770  

server.2=127.0.0.1:8881:7771

server.3=127.0.0.1:8882:7772

 

3. Configure and install the third ZooKeeper instance

[root@linux-node2 ~]# cp -a /zookeeper/zk1/* /zookeeper/zk3/

[root@linux-node2 ~]# echo 3 > /zookeeper/zk3/zookeeper-3.4.10/dataPath/myid
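The copied zoo.cfg is adjusted the same way, this time pointing at zk3 and client port 2183:

# update the copied config for the third instance
CFG=/zookeeper/zk3/zookeeper-3.4.10/conf/zoo.cfg
sed -i 's#^dataDir=.*#dataDir=/zookeeper/zk3/zookeeper-3.4.10/dataPath#' "$CFG"
sed -i 's#^clientPort=.*#clientPort=2183#' "$CFG"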

[root@linux-node2 ~]# grep '^[a-zA-Z]' /zookeeper/zk3/zookeeper-3.4.10/conf/zoo.cfg

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/zookeeper/zk3/zookeeper-3.4.10/dataPath

clientPort=2183

server.1=127.0.0.1:8880:7770  

server.2=127.0.0.1:8881:7771

server.3=127.0.0.1:8882:7772

 

4. Start the three ZooKeeper instances

[root@linux-node2 ~]# /zookeeper/zk1/zookeeper-3.4.10/bin/zkServer.sh start

[root@linux-node2 ~]# /zookeeper/zk2/zookeeper-3.4.10/bin/zkServer.sh start

[root@linux-node2 ~]# /zookeeper/zk3/zookeeper-3.4.10/bin/zkServer.sh start

 

 

Note: if anything goes wrong during startup, check the zookeeper.out file for errors.
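Once all three instances are running, each one's role can also be checked with the status subcommand; one should report Mode: leader and the other two Mode: follower:

/zookeeper/zk1/zookeeper-3.4.10/bin/zkServer.sh status
/zookeeper/zk2/zookeeper-3.4.10/bin/zkServer.sh status
/zookeeper/zk3/zookeeper-3.4.10/bin/zkServer.sh status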

 

5. Check that the cluster is working

[root@linux-node2 ~]# /zookeeper/zk1/zookeeper-3.4.10/bin/zkCli.sh -server 127.0.0.1:2181

or

[root@linux-node2 ~]# /zookeeper/zk1/zookeeper-3.4.10/bin/zkCli.sh -server 127.0.0.1:2182

or

[root@linux-node2 ~]# /zookeeper/zk2/zookeeper-3.4.10/bin/zkCli.sh -server 127.0.0.1:2183

 

 

[zk: 127.0.0.1:2181(CONNECTED) 0] ls

[zk: 127.0.0.1:2181(CONNECTED) 1] ls /zookeeper/quota

[]

[zk: 127.0.0.1:2181(CONNECTED) 2] ls /zookeeper/quota

[]

[zk: 127.0.0.1:2181(CONNECTED) 3] get /zookeeper/quota

 

cZxid = 0x0

ctime = Thu Jan 01 08:00:00 CST 1970

mZxid = 0x0

mtime = Thu Jan 01 08:00:00 CST 1970

pZxid = 0x0

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x0

dataLength = 0

numChildren = 0

 

Note: when testing, any combination of client and port can be used; there is no fixed pairing.
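A quick way to confirm that the three instances really form one quorum is to create a znode through one client port and read it back through another (the /test znode and its data here are only an illustration):

# create a test znode via the first instance
/zookeeper/zk1/zookeeper-3.4.10/bin/zkCli.sh -server 127.0.0.1:2181 create /test "hello"

# read it back via the second instance; the data should have replicated
/zookeeper/zk1/zookeeper-3.4.10/bin/zkCli.sh -server 127.0.0.1:2182 get /test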

 

III. Configure and install Kafka

1. Add the first Kafka instance

[root@linux-node2 ~]# mkdir -p /kafka/{ka1,ka2,ka3}

[root@linux-node2 ~]# tar -xf /usr/local/src/kafka_2.10-0.10.1.0.tgz -C /kafka/ka1/

[root@linux-node2 ~]# mkdir /kafka/ka1/kafka_2.10-0.10.1.0/logs

[root@linux-node2 ~]# cd /kafka/ka1/kafka_2.10-0.10.1.0/config

[root@linux-node2 config]# grep "^[a-zA-Z]" server.properties

broker.id=0

listeners=PLAINTEXT://:9092

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/kafka/ka1/kafka_2.10-0.10.1.0/logs

num.partitions=1

num.recovery.threads.per.data.dir=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=localhost:2181,localhost:2182,localhost:2183

zookeeper.connection.timeout.ms=6000

 

 

2. Add the second Kafka instance

[root@linux-node2 ~]# cp -a /kafka/ka1/* /kafka/ka2/
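As with ZooKeeper, the copied server.properties still carries the first broker's settings; broker.id, listeners, and log.dirs have to be changed so that they match the grep output below. A minimal sed sketch, assuming this article's paths:

# update the copied config for the second broker
CFG=/kafka/ka2/kafka_2.10-0.10.1.0/config/server.properties
sed -i 's/^broker.id=.*/broker.id=1/' "$CFG"
sed -i 's#^listeners=.*#listeners=PLAINTEXT://:9093#' "$CFG"
sed -i 's#^log.dirs=.*#log.dirs=/kafka/ka2/kafka_2.10-0.10.1.0/logs#' "$CFG"

zookeeper.connect stays the same for every broker, since they all register in the same ZooKeeper ensemble.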

[root@linux-node2 ~]# cd /kafka/ka2/kafka_2.10-0.10.1.0/config

[root@linux-node2 config]# grep "^[a-zA-Z]" server.properties

broker.id=1

listeners=PLAINTEXT://:9093

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/kafka/ka2/kafka_2.10-0.10.1.0/logs

num.partitions=1

num.recovery.threads.per.data.dir=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=localhost:2181,localhost:2182,localhost:2183

zookeeper.connection.timeout.ms=6000

 

 

3. Add the third Kafka instance

[root@linux-node2 ~]# cp -a /kafka/ka1/* /kafka/ka3/
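The third copy is adjusted the same way:

# update the copied config for the third broker
CFG=/kafka/ka3/kafka_2.10-0.10.1.0/config/server.properties
sed -i 's/^broker.id=.*/broker.id=2/' "$CFG"
sed -i 's#^listeners=.*#listeners=PLAINTEXT://:9094#' "$CFG"
sed -i 's#^log.dirs=.*#log.dirs=/kafka/ka3/kafka_2.10-0.10.1.0/logs#' "$CFG"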

[root@linux-node2 ~]# cd /kafka/ka3/kafka_2.10-0.10.1.0/config

[root@linux-node2 config]# grep "^[a-zA-Z]" server.properties

broker.id=2

listeners=PLAINTEXT://:9094

num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

socket.request.max.bytes=104857600

log.dirs=/kafka/ka3/kafka_2.10-0.10.1.0/logs

num.partitions=1

num.recovery.threads.per.data.dir=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=300000

zookeeper.connect=localhost:2181,localhost:2182,localhost:2183

zookeeper.connection.timeout.ms=6000

 

4. Start the Kafka brokers

Node 1

[root@linux-node2 ~]# /kafka/ka1/kafka_2.10-0.10.1.0/bin/kafka-server-start.sh /kafka/ka1/kafka_2.10-0.10.1.0/config/server.properties &

 

Node 2

[root@linux-node2 ~]# /kafka/ka2/kafka_2.10-0.10.1.0/bin/kafka-server-start.sh /kafka/ka2/kafka_2.10-0.10.1.0/config/server.properties &

 

Node 3

[root@linux-node2 ~]# /kafka/ka3/kafka_2.10-0.10.1.0/bin/kafka-server-start.sh /kafka/ka3/kafka_2.10-0.10.1.0/config/server.properties &

 

Check the process status:

[root@linux-node2 bin]# ps -ef | grep kafka
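Beyond checking the processes, the cluster can be exercised end to end by creating a topic replicated across all three brokers and then describing it (the topic name test is only an example):

# create a topic with one partition replicated to all three brokers
/kafka/ka1/kafka_2.10-0.10.1.0/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic test

# list the topics registered in ZooKeeper
/kafka/ka1/kafka_2.10-0.10.1.0/bin/kafka-topics.sh --list --zookeeper localhost:2181

# describe the topic to see which broker leads the partition and the in-sync replicas
/kafka/ka1/kafka_2.10-0.10.1.0/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test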

 

 

