Installing Kafka on CentOS 7.0


Environment

CentOS 7.0

hadoop 2.7.3 (see "CentOS 7.0 + hadoop 2.7 cluster setup")

zookeeper 3.4.11 (see "Installing ZooKeeper 3.4.11 on CentOS 7.0")

kafka 1.0.0

Download and Install

Download

Official site: http://kafka.apache.org

You can choose "download" or "quickstart" in the left-hand menu to download Kafka.

Baidu Cloud: https://pan.baidu.com/s/1eRBoaK6 (password: yti2)

Or fetch it with wget:

wget http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz

Install

I downloaded the archive to /data/software and will install it under /opt/kafka.

mkdir /opt/kafka
tar -zxvf kafka_2.11-1.0.0.tgz -C /opt/kafka/

View the directory structure:

cd /opt/kafka/kafka_2.11-1.0.0
tree

# Output
.
├── bin
│   ├── connect-distributed.sh
│   ├── connect-standalone.sh
│   ├── kafka-acls.sh
│   ├── kafka-broker-api-versions.sh
│   ├── kafka-configs.sh
│   ├── kafka-console-consumer.sh
│   ├── kafka-console-producer.sh
│   ├── kafka-consumer-groups.sh
│   ├── kafka-consumer-perf-test.sh
│   ├── kafka-delete-records.sh
│   ├── kafka-log-dirs.sh
│   ├── kafka-mirror-maker.sh
│   ├── kafka-preferred-replica-election.sh
│   ├── kafka-producer-perf-test.sh
│   ├── kafka-reassign-partitions.sh
│   ├── kafka-replay-log-producer.sh
│   ├── kafka-replica-verification.sh
│   ├── kafka-run-class.sh
│   ├── kafka-server-start.sh
│   ├── kafka-server-stop.sh
│   ├── kafka-simple-consumer-shell.sh
│   ├── kafka-streams-application-reset.sh
│   ├── kafka-topics.sh
│   ├── kafka-verifiable-consumer.sh
│   ├── kafka-verifiable-producer.sh
│   ├── trogdor.sh
│   ├── windows
│   │   ├── connect-distributed.bat
│   │   ├── connect-standalone.bat
│   │   ├── kafka-acls.bat
│   │   ├── kafka-broker-api-versions.bat
│   │   ├── kafka-configs.bat
│   │   ├── kafka-console-consumer.bat
│   │   ├── kafka-console-producer.bat
│   │   ├── kafka-consumer-groups.bat
│   │   ├── kafka-consumer-offset-checker.bat
│   │   ├── kafka-consumer-perf-test.bat
│   │   ├── kafka-mirror-maker.bat
│   │   ├── kafka-preferred-replica-election.bat
│   │   ├── kafka-producer-perf-test.bat
│   │   ├── kafka-reassign-partitions.bat
│   │   ├── kafka-replay-log-producer.bat
│   │   ├── kafka-replica-verification.bat
│   │   ├── kafka-run-class.bat
│   │   ├── kafka-server-start.bat
│   │   ├── kafka-server-stop.bat
│   │   ├── kafka-simple-consumer-shell.bat
│   │   ├── kafka-topics.bat
│   │   ├── zookeeper-server-start.bat
│   │   ├── zookeeper-server-stop.bat
│   │   └── zookeeper-shell.bat
│   ├── zookeeper-security-migration.sh
│   ├── zookeeper-server-start.sh
│   ├── zookeeper-server-stop.sh
│   └── zookeeper-shell.sh
├── config
│   ├── connect-console-sink.properties
│   ├── connect-console-source.properties
│   ├── connect-distributed.properties
│   ├── connect-file-sink.properties
│   ├── connect-file-source.properties
│   ├── connect-log4j.properties
│   ├── connect-standalone.properties
│   ├── consumer.properties
│   ├── log4j.properties
│   ├── producer.properties
│   ├── server.properties
│   ├── tools-log4j.properties
│   └── zookeeper.properties
├── libs
│   ├── aopalliance-repackaged-2.5.0-b32.jar
│   ├── argparse4j-0.7.0.jar
│   ├── commons-lang3-3.5.jar
│   ├── connect-api-1.0.0.jar
│   ├── connect-file-1.0.0.jar
│   ├── connect-json-1.0.0.jar
│   ├── connect-runtime-1.0.0.jar
│   ├── connect-transforms-1.0.0.jar
│   ├── guava-20.0.jar
│   ├── hk2-api-2.5.0-b32.jar
│   ├── hk2-locator-2.5.0-b32.jar
│   ├── hk2-utils-2.5.0-b32.jar
│   ├── jackson-annotations-2.9.1.jar
│   ├── jackson-core-2.9.1.jar
│   ├── jackson-databind-2.9.1.jar
│   ├── jackson-jaxrs-base-2.9.1.jar
│   ├── jackson-jaxrs-json-provider-2.9.1.jar
│   ├── jackson-module-jaxb-annotations-2.9.1.jar
│   ├── javassist-3.20.0-GA.jar
│   ├── javassist-3.21.0-GA.jar
│   ├── javax.annotation-api-1.2.jar
│   ├── javax.inject-1.jar
│   ├── javax.inject-2.5.0-b32.jar
│   ├── javax.servlet-api-3.1.0.jar
│   ├── javax.ws.rs-api-2.0.1.jar
│   ├── jersey-client-2.25.1.jar
│   ├── jersey-common-2.25.1.jar
│   ├── jersey-container-servlet-2.25.1.jar
│   ├── jersey-container-servlet-core-2.25.1.jar
│   ├── jersey-guava-2.25.1.jar
│   ├── jersey-media-jaxb-2.25.1.jar
│   ├── jersey-server-2.25.1.jar
│   ├── jetty-continuation-9.2.22.v20170606.jar
│   ├── jetty-http-9.2.22.v20170606.jar
│   ├── jetty-io-9.2.22.v20170606.jar
│   ├── jetty-security-9.2.22.v20170606.jar
│   ├── jetty-server-9.2.22.v20170606.jar
│   ├── jetty-servlet-9.2.22.v20170606.jar
│   ├── jetty-servlets-9.2.22.v20170606.jar
│   ├── jetty-util-9.2.22.v20170606.jar
│   ├── jopt-simple-5.0.4.jar
│   ├── kafka_2.11-1.0.0.jar
│   ├── kafka_2.11-1.0.0.jar.asc
│   ├── kafka_2.11-1.0.0-javadoc.jar
│   ├── kafka_2.11-1.0.0-javadoc.jar.asc
│   ├── kafka_2.11-1.0.0-scaladoc.jar
│   ├── kafka_2.11-1.0.0-scaladoc.jar.asc
│   ├── kafka_2.11-1.0.0-sources.jar
│   ├── kafka_2.11-1.0.0-sources.jar.asc
│   ├── kafka_2.11-1.0.0-test.jar
│   ├── kafka_2.11-1.0.0-test.jar.asc
│   ├── kafka_2.11-1.0.0-test-sources.jar
│   ├── kafka_2.11-1.0.0-test-sources.jar.asc
│   ├── kafka-clients-1.0.0.jar
│   ├── kafka-log4j-appender-1.0.0.jar
│   ├── kafka-streams-1.0.0.jar
│   ├── kafka-streams-examples-1.0.0.jar
│   ├── kafka-tools-1.0.0.jar
│   ├── log4j-1.2.17.jar
│   ├── lz4-java-1.4.jar
│   ├── maven-artifact-3.5.0.jar
│   ├── metrics-core-2.2.0.jar
│   ├── osgi-resource-locator-1.0.1.jar
│   ├── plexus-utils-3.0.24.jar
│   ├── reflections-0.9.11.jar
│   ├── rocksdbjni-5.7.3.jar
│   ├── scala-library-2.11.11.jar
│   ├── slf4j-api-1.7.25.jar
│   ├── slf4j-log4j12-1.7.25.jar
│   ├── snappy-java-1.1.4.jar
│   ├── validation-api-1.1.0.Final.jar
│   ├── zkclient-0.10.jar
│   └── zookeeper-3.4.10.jar
├── LICENSE
├── NOTICE
└── site-docs
    └── kafka_2.11-1.0.0-site-docs.tgz

5 directories, 143 files

Note: if the tree command is missing, install it first with yum install -y tree.

Configuration

Configure environment variables

vi /etc/profile

# Add:
export KAFKA_HOME=/opt/kafka/kafka_2.11-1.0.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$ZK_HOME/bin:$HBASE_HOME/bin:$FLUME_HOME/bin:$KAFKA_HOME/bin
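After saving, reload the profile so the new variables take effect in the current shell:

source /etc/profile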

Configure the server

cd /opt/kafka/kafka_2.11-1.0.0/config
vi server.properties

# Modify (note: broker.id must differ on every node)
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

# Modify (note: use each node's own host in listeners)
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
listeners=PLAINTEXT://192.168.122.128:9092

# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# zookeeper.connect=localhost:2181
zookeeper.connect=192.168.122.128:2181,192.168.122.129:2181,192.168.122.130:2181

Note: on every node, broker.id must be a different value, and the listeners host must be changed to match that node (for a single-machine setup this can be skipped).
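As a concrete sketch, the matching lines on the other two nodes would look like this (the broker.id values 2 and 3 are my assumption; any unique integers work):

# server.properties on 192.168.122.129
broker.id=2
listeners=PLAINTEXT://192.168.122.129:9092

# server.properties on 192.168.122.130
broker.id=3
listeners=PLAINTEXT://192.168.122.130:9092

The zookeeper.connect line stays the same on all three nodes.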

Start

Before starting Kafka, make sure ZooKeeper is already running. To start ZooKeeper:

cd /opt/zookeeper/zookeeper-3.4.11/bin
./zkServer.sh start
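Once it is started on every node, you can confirm the ensemble is healthy with the status subcommand (one node should report leader, the others follower):

./zkServer.sh status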

Start Kafka (start it as a daemon; this way Kafka will not quietly shut down after a while):

cd /opt/kafka/kafka_2.11-1.0.0/bin
./kafka-server-start.sh -daemon ../config/server.properties
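If you want to watch the broker come up, or catch a startup failure early (see the pitfall at the end of this post), tail the server log under the installation directory:

tail -f /opt/kafka/kafka_2.11-1.0.0/logs/server.log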

Check whether it started:

jps

# Output on 128
39842 ResourceManager
52325 QuorumPeerMain
64853 Bootstrap
91382 Kafka
39545 SecondaryNameNode
60505 HRegionServer
60346 HMaster
91402 Jps
39213 NameNode

# Output on 129
32498 Jps
45738 DataNode
5339 QuorumPeerMain
120396 NodeManager
32287 Kafka

# Output on 130
4593 QuorumPeerMain
32118 Jps
32103 Kafka
45003 DataNode
119466 NodeManager

As you can see, Kafka is running on each node.

Note: to stop Kafka:

./kafka-server-stop.sh

Test

Create a topic

With Kafka running, create a topic:

./kafka-topics.sh --create --zookeeper 192.168.122.128:2181 --replication-factor 1 --partitions 1 --topic test

# Output
Created topic "test".
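To double-check the new topic's partition and replica assignment, you can also describe it; the Leader/Replicas/Isr values below are illustrative and depend on which broker received the assignment:

./kafka-topics.sh --describe --zookeeper 192.168.122.128:2181 --topic test
# Output (illustrative)
Topic:test    PartitionCount:1    ReplicationFactor:1    Configs:
    Topic: test    Partition: 0    Leader: 1    Replicas: 1    Isr: 1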

Switch to the other two machines, 129 and 130, and list the topics:

./kafka-topics.sh --list --zookeeper 192.168.122.128:2181

# Output
test

Create a producer

./kafka-console-producer.sh --broker-list 192.168.122.128:9092 --topic test

# Input
hello world
hello world
hello python
hello hadoop
hello kafka

Create a consumer

On the other two machines, create consumers:

./kafka-console-consumer.sh --bootstrap-server 192.168.122.129:9092 --topic test --from-beginning
./kafka-console-consumer.sh --bootstrap-server 192.168.122.130:9092 --topic test --from-beginning

# Output
hello world
hello world
hello python
hello hadoop
hello kafka

Result

At this point the Kafka test is complete: Kafka was started on machine 128, a topic was created, and a producer published messages; the other two machines, 129 and 130, could see 128's topic, and the consumers created there successfully received the producer's messages.

Delete a topic

# in kafka/bin
cd /opt/kafka/kafka_2.11-1.0.0/bin

# List topics
# ./kafka-topics.sh --list --zookeeper <ZooKeeper address>
./kafka-topics.sh --list --zookeeper 192.168.122.128:2181

# Show topic details
./kafka-topics.sh --describe --zookeeper 192.168.122.128:2181

# Delete a topic
# ./kafka-topics.sh --delete --zookeeper <ZooKeeper address> --topic <topic name>
./kafka-topics.sh --delete --zookeeper 192.168.122.128:2181 --topic test

# Output
Topic test is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
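Whether --delete removes the topic outright depends on the delete.topic.enable broker setting; in Kafka 1.0.0 it defaults to true, while older releases defaulted to false. If your brokers run with it disabled, you can enable it in server.properties and restart the brokers:

# config/server.properties
delete.topic.enable=true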

Check whether the topic was actually deleted or only marked for deletion; if it is only marked, remove it completely as follows:

# Open the ZooKeeper client
cd /opt/zookeeper/zookeeper-3.4.11/bin/
./zkCli.sh

# Inside the zookeeper shell:
# list the topic znodes
ls /brokers/topics
# delete the topic
rmr /brokers/topics/<topic name>
# check that the topic is gone
ls /brokers/topics
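Removing the znode only deletes the topic's metadata. If you also want the on-disk data gone, an extra step (assuming the default log.dirs=/tmp/kafka-logs from server.properties) is to delete the topic's partition directories on every broker:

# on each broker, after stopping Kafka
rm -rf /tmp/kafka-logs/test-*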

A pitfall I ran into

When Kafka was started as a daemon, jps showed the Kafka process at first, but it disappeared shortly afterwards.

Since startup reported no error, I first assumed I had made a mistake when shutting it down, and kept searching Baidu for reasons why Kafka would close itself soon after starting, without finding anything. Then it hit me: when something fails, read the logs. I dug into Kafka's logs and found the following:

FATAL [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.KafkaException: Socket server failed to bind to 192.169.122.128:9092: Cannot assign requested address.
        at kafka.network.Acceptor.openServerSocket(SocketServer.scala:331)
        at kafka.network.Acceptor.<init>(SocketServer.scala:256)
        at kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:97)
        at kafka.network.SocketServer$$anonfun$startup$1.apply(SocketServer.scala:89)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at kafka.network.SocketServer.startup(SocketServer.scala:89)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:229)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
        at kafka.Kafka$.main(Kafka.scala:92)
        at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
        at kafka.network.Acceptor.openServerSocket(SocketServer.scala:327)
        ... 10 more

Seeing this log entry, I wanted to bang my head against a wall: the cause was staring right at me. The IP was mistyped; 192.168.122.128:9092 had been written as 192.169.122.128:9092. After correcting it, Kafka started normally.
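A quick sanity check for this class of bind error is to confirm that the address in listeners is actually configured on the machine, for example:

ip addr | grep 192.168.122.128
# if grep prints nothing, the broker cannot bind to that address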

Lesson learned: when something goes wrong, first read the error message, find the cause, and then look up a fix; if there is no error message, go read the logs. Troubleshooting from logs is an excellent habit. This episode made me take log output much more seriously, instead of guessing at causes and flailing around after a failure.


