Kafka Installation, Configuration, and Testing


Original article: http://lxw1234.com/archives/2015/09/510.htm

Excerpted at: https://www.kancloud.cn/digest/bigdata-open/129774


Kafka's overall architecture:

(figure: Kafka architecture)

The deployment used in this article:

(figure: deployment layout for this article)

Two brokers are deployed on each of the two machines, Node1 and Node2, and ZooKeeper runs as a separate ZK cluster.

On each machine, download and extract kafka_2.10-0.8.2.1:

http://kafka.apache.org/downloads.html
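As a sketch (the archive mirror URL is an assumption; verify it against the downloads page above):

```shell
# Fetch and unpack the 0.8.2.1 release built for Scala 2.10,
# then point KAFKA_HOME at the extracted directory.
KAFKA_PKG=kafka_2.10-0.8.2.1
wget "https://archive.apache.org/dist/kafka/0.8.2.1/${KAFKA_PKG}.tgz"
tar -xzf "${KAFKA_PKG}.tgz"
export KAFKA_HOME="$PWD/${KAFKA_PKG}"
```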

Kafka Configuration

Node1:

IP: 172.16.212.17

cd $KAFKA_HOME/config
cp server.properties server1.properties
cp server.properties server2.properties

vi server1.properties and change the following parameters:

broker.id=1
port=9091
host.name=172.16.212.17
log.dirs=/tmp/kafka-logs/broker1/
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

vi server2.properties and change the following parameters:

broker.id=2
port=9092
host.name=172.16.212.17
log.dirs=/tmp/kafka-logs/broker2/
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

Node2:

IP: 172.16.212.102

cd $KAFKA_HOME/config
cp server.properties server3.properties
cp server.properties server4.properties

vi server3.properties and change the following parameters:

broker.id=3
port=9091
host.name=172.16.212.102
log.dirs=/tmp/kafka-logs/broker3/
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

vi server4.properties and change the following parameters:

broker.id=4
port=9092
host.name=172.16.212.102
log.dirs=/tmp/kafka-logs/broker4/
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
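The edits on both nodes are mechanical, so each node's two properties files can also be generated with sed. A sketch for Node1 (the heredoc below is a minimal stand-in for the stock server.properties; on a real install, run this in $KAFKA_HOME/config against the shipped file):

```shell
# Minimal stand-in for the stock server.properties (illustration only).
cat > server.properties <<'EOF'
broker.id=0
port=9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
EOF

# Derive the two per-broker configs used on Node1.
for i in 1 2; do
  sed -e "s/^broker\.id=.*/broker.id=$i/" \
      -e "s/^port=.*/port=909$i/" \
      -e "s|^log\.dirs=.*|log.dirs=/tmp/kafka-logs/broker$i/|" \
      -e "s/^zookeeper\.connect=.*/zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/" \
      server.properties > "server$i.properties"
  echo "host.name=172.16.212.17" >> "server$i.properties"
done
```

For Node2, the same loop over 3 and 4 (with ports 9091/9092 and host.name=172.16.212.102) produces server3.properties and server4.properties.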

Starting Kafka

Node1:

cd $KAFKA_HOME/bin
nohup ./kafka-server-start.sh $KAFKA_HOME/config/server1.properties &
nohup ./kafka-server-start.sh $KAFKA_HOME/config/server2.properties &

Node2:

cd $KAFKA_HOME/bin
nohup ./kafka-server-start.sh $KAFKA_HOME/config/server3.properties &
nohup ./kafka-server-start.sh $KAFKA_HOME/config/server4.properties &

After startup, all 4 brokers are visible in ZooKeeper:

[zk: localhost:2181(CONNECTED) 4] ls /brokers/ids

[3, 2, 1, 4]

Creating a topic

On any node:

cd $KAFKA_HOME/bin
./kafka-topics.sh --create --zookeeper zk1:2181,zk2:2181,zk3:2181 --replication-factor 2 --partitions 2 --topic lxw1234.com

This creates a topic named lxw1234.com with 2 partitions, each replicated on 2 of the 4 brokers.

Describing the topic

cd $KAFKA_HOME/bin
./kafka-topics.sh --describe --zookeeper zk1:2181,zk2:2181,zk3:2181 --topic lxw1234.com
Topic:lxw1234.com PartitionCount:2 ReplicationFactor:2 Configs:
Topic: lxw1234.com Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: lxw1234.com Partition: 1 Leader: 2 Replicas: 2,3 Isr: 2,3
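A healthy topic has Isr equal to Replicas for every partition. As a sketch, that check can be done mechanically on the --describe output; here the sample output above is written to a file to stand in for a live run:

```shell
# Sample --describe output from above, standing in for
# ./kafka-topics.sh --describe ... > describe.out
cat > describe.out <<'EOF'
Topic:lxw1234.com PartitionCount:2 ReplicationFactor:2 Configs:
Topic: lxw1234.com Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: lxw1234.com Partition: 1 Leader: 2 Replicas: 2,3 Isr: 2,3
EOF

# Flag any partition whose ISR differs from its replica list.
awk '/ Partition: / {
  for (i = 1; i <= NF; i++) {
    if ($i == "Partition:") p = $(i + 1)
    if ($i == "Replicas:")  r = $(i + 1)
    if ($i == "Isr:")       s = $(i + 1)
  }
  if (r != s) { print "partition " p " under-replicated: " r " vs " s; bad = 1 }
}
END { exit bad }' describe.out && echo "all partitions fully replicated"
```

With the sample output, every ISR matches its replica list, so the script prints "all partitions fully replicated".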

Sending messages with a console producer

cd $KAFKA_HOME/bin
./kafka-console-producer.sh --broker-list 172.16.212.17:9091,172.16.212.17:9092,172.16.212.102:9091,172.16.212.102:9092 --topic lxw1234.com

Once it starts, type a few messages at the console:

[root@dev bin]# ./kafka-console-producer.sh --broker-list 172.16.212.17:9091,172.16.212.17:9092,172.16.212.102:9091,172.16.212.102:9092 --topic lxw1234.com
[2015-09-24 14:03:24,616] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
This is Kafka producer.
Hello, lxw1234.com.

Receiving messages with a console consumer

cd $KAFKA_HOME/bin
./kafka-console-consumer.sh --zookeeper zk1:2181,zk2:2181,zk3:2181 --topic lxw1234.com --from-beginning
This is Kafka producer.
Hello, lxw1234.com.

You can now type more messages in the producer console and confirm that the consumer console prints them.

Deleting the topic

cd $KAFKA_HOME/bin
./kafka-topics.sh --delete --zookeeper zk1:2181,zk2:2181,zk3:2181 --topic lxw1234.com

After this runs, Kafka only marks the topic for deletion; the related nodes still have to be removed from ZooKeeper by hand:

[zk: localhost:2181(CONNECTED) 5] rmr /brokers/topics/lxw1234.com
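The manual ZooKeeper cleanup is needed because, on 0.8.2.x, --delete only marks the topic unless the brokers are started with topic deletion enabled (my understanding is that it defaults to off on this version; verify against your broker config):

```properties
# server.properties on every broker; restart the brokers after changing it
delete.topic.enable=true
```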

Stopping Kafka

cd $KAFKA_HOME/bin

./kafka-server-stop.sh

Alternatively, find the Kafka processes and kill them directly. Note that kafka-server-stop.sh stops every Kafka broker running on the machine, so with two brokers per host it takes down both.

