Setting up a Kafka cluster on CentOS 7
Source: Internet | Editor: 程序博客网 | Date: 2024/06/04 23:22
Overview
Kafka can be installed as a cluster or on a single machine; this guide covers a cluster install. The Kafka distribution also ships with its own ZooKeeper, which you could use instead, but a separately installed ZooKeeper is recommended, and this tutorial uses a standalone ZooKeeper installation.
Environment
3 CentOS 7 virtual machines: 10.15.21.62, 10.10.182.168, 10.10.182.169
kafka_2.10-0.10.2.0
zookeeper-3.4.9
jdk-1.8.0 (installation not covered here)
Environment setup
Create the directory layout (the data directories must match the paths used in zoo.cfg below):
mkdir -p /opt/zookeeper/zkdata
mkdir -p /opt/zookeeper/zkdatalog
mkdir /opt/kafka
Download the software
cd /opt/zookeeper
wget https://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz
cd /opt/kafka/
wget http://mirror.bit.edu.cn/apache/kafka/0.10.2.0/kafka_2.10-0.10.2.0.tgz
ZooKeeper cluster setup
- Edit the configuration file
cd /opt/zookeeper/
tar -xvf zookeeper-3.4.9.tar.gz
cd zookeeper-3.4.9/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/zkdata
dataLogDir=/opt/zookeeper/zkdatalog
# the port at which the clients will connect
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
In server.1, the 1 is the server id (any integer works); it identifies which server in the ensemble this is, and the same id is used again below when writing the myid file.
The first port (2888 by default) is used by followers to connect to the leader; the second (3888 by default) is used for leader election.
Save the file when done.
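The zk1, zk2, and zk3 hostnames in zoo.cfg must resolve on every machine. Assuming they map to the three VMs listed earlier (an assumption; adjust to your network), an /etc/hosts entry set like this works:

```
# /etc/hosts on all three machines (assumed mapping)
10.15.21.62    zk1
10.10.182.168  zk2
10.10.182.169  zk3
```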
server1:
echo "1" > /opt/zookeeper/zkdata/myid
server2:
echo "2" > /opt/zookeeper/zkdata/myid
server3:
echo "3" > /opt/zookeeper/zkdata/myid
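The myid file is how ZooKeeper knows which server.N line in zoo.cfg refers to itself. A minimal local demo of the convention (DATA_DIR is a throwaway demo path here; on a real node it would be /opt/zookeeper/zkdata, matching dataDir in zoo.cfg):

```shell
# ZooKeeper reads dataDir/myid at startup to learn its own server id.
DATA_DIR=./zkdata-demo        # demo path; /opt/zookeeper/zkdata on a real node
mkdir -p "$DATA_DIR"
SERVER_ID=2                   # 1 on server1, 2 on server2, 3 on server3
echo "$SERVER_ID" > "$DATA_DIR/myid"
cat "$DATA_DIR/myid"          # prints: 2
```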
- Start the service and check its status
Start the service on each node in turn, then check the status:
cd /opt/zookeeper/zookeeper-3.4.9/bin
./zkServer.sh start
./zkServer.sh status
Kafka cluster setup
- Edit the configuration file
cd /opt/kafka/kafka_2.10-0.10.2.0/config/
vi server.properties
# The id of the broker. This must be set to a unique integer for each broker.
# (the unique id of this machine within the cluster)
broker.id=1
# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
# (listen address; use this machine's own IP on each broker)
listeners=PLAINTEXT://10.15.21.62:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# (address given to producers and consumers; may be left unset to reuse "listeners")
advertised.listeners=PLAINTEXT://10.15.21.62:9092

# Maps listener names to security protocols, the default is for them to be the same.
# See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads handling network requests
num.network.threads=4

# The number of threads doing disk I/O
num.io.threads=30

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
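Only broker.id, listeners, and advertised.listeners differ between the three brokers. A minimal sketch of producing those host-specific lines (BROKER_ID and LISTEN_IP are placeholder values for the second VM; the sketch writes a local demo file rather than touching the real server.properties):

```shell
# Per-host values -- change these on each broker (assumed values for
# the second machine of the example cluster: id 2, its own IP).
BROKER_ID=2
LISTEN_IP=10.10.182.168

# Write only the host-specific settings to a demo file. On a real node
# you would edit /opt/kafka/kafka_2.10-0.10.2.0/config/server.properties.
cat > server.properties.demo <<EOF
broker.id=${BROKER_ID}
listeners=PLAINTEXT://${LISTEN_IP}:9092
advertised.listeners=PLAINTEXT://${LISTEN_IP}:9092
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
EOF

grep '^broker.id' server.properties.demo   # prints: broker.id=2
```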
- Start the Kafka cluster and test it
Run on every host in the cluster:
cd /opt/kafka/kafka_2.10-0.10.2.0/bin/
./kafka-server-start.sh -daemon ../config/server.properties
- Verify the installation
Create a topic on any one of the hosts:
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Start a consumer on one host:
./kafka-console-consumer.sh --zookeeper zk1:2181,zk2:2181,zk3:2181 --topic test
Publish messages from another host:
./kafka-console-producer.sh --broker-list 10.15.21.62:9092,10.10.182.168:9092,10.10.182.169:9092 --topic test
Type a message and press Enter.
The consumer should print the message, which confirms the installation works.
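The --broker-list argument can also be assembled from the host list instead of typed by hand. A small sketch, using the three VM IPs from this guide:

```shell
# Build "host:9092,host:9092,..." from the cluster hosts.
HOSTS="10.15.21.62 10.10.182.168 10.10.182.169"
BROKERS=$(printf '%s:9092,' $HOSTS)   # printf repeats the format per host
BROKERS=${BROKERS%,}                  # strip the trailing comma
echo "$BROKERS"
# prints: 10.15.21.62:9092,10.10.182.168:9092,10.10.182.169:9092
# usage: ./kafka-console-producer.sh --broker-list "$BROKERS" --topic test
```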