Kafka Cluster Setup
Quick tutorial
http://kafka.apache.org/quickstart
Download Kafka
Go to the download page: http://kafka.apache.org/downloads.html
Or download it directly from a Linux terminal:
wget -q http://apache.fayea.com/apache-mirror/kafka/0.8.1/kafka_2.8.0-0.8.1.tgz
Extract the archive, then edit config/server.properties. The key settings to change are:
broker.id=<ordinal of this node, unique per broker>
host.name=<this machine's IP>
zookeeper.connect=zk-ip1:client-port,zk-ip2:client-port,zk-ip3:client-port
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=localhost
host.name=10.100.6.177

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=2

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=2

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=536870912

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=60000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=10.100.6.147:2181,10.100.6.176:2181,10.100.6.177:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=1000000
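The three per-node edits can also be scripted, which helps when repeating them across brokers. A minimal sketch using GNU sed, with the values of this article's example node; on a real node you would run only the sed command against config/server.properties (here a tiny stand-in file is written first so the sketch is self-contained):

```shell
# Per-node values (this article's example node; change per broker)
BROKER_ID=2
HOST_IP=10.100.6.177
ZK="10.100.6.147:2181,10.100.6.176:2181,10.100.6.177:2181"

# Stand-in for config/server.properties, just for illustration
cat > server.properties <<'EOF'
broker.id=0
#host.name=localhost
zookeeper.connect=localhost:2181
EOF

# Patch the three settings in place (GNU sed; \? makes the leading '#' optional,
# so a commented-out default like '#host.name=localhost' is also replaced)
sed -i \
  -e "s/^broker\.id=.*/broker.id=${BROKER_ID}/" \
  -e "s/^#\?host\.name=.*/host.name=${HOST_IP}/" \
  -e "s|^zookeeper\.connect=.*|zookeeper.connect=${ZK}|" \
  server.properties

cat server.properties
```

The alternate `|` delimiter in the last expression avoids escaping the colons and commas in the ZooKeeper connect string.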
Testing
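The broker needs a reachable ZooKeeper ensemble before it starts (the three 10.100.6.x nodes in the config above). If you are trying this on a single machine instead, Kafka bundles a ZooKeeper start script with a default config; a sketch:

```shell
bin/zookeeper-server-start.sh config/zookeeper.properties &
```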
Start the Kafka server:
bin/kafka-server-start.sh config/server.properties &
Stop the Kafka server:
bin/kafka-server-stop.sh
Stop the ZooKeeper server:
bin/zookeeper-server-stop.sh
Use jps to check the running processes; the broker appears as "Kafka" and a standalone ZooKeeper as "QuorumPeerMain".
Distributed connectivity test
The ZooKeeper server, Kafka server, and producer all run on server1, IP address 192.168.1.10.
The consumer runs on server2, IP address 192.168.1.12.
Run the producer on server1 and the consumer on server2:
bin/kafka-console-producer.sh --broker-list 192.168.1.10:9092 --topic test
bin/kafka-console-consumer.sh --zookeeper 192.168.1.10:2181 --topic test --from-beginning
Type the string hello kafka in the producer console; the consumer should print hello kafka.
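One note on the topic itself: Kafka 0.8.x auto-creates topics on first use by default (auto.create.topics.enable=true), but creating the topic explicitly lets you choose partition and replication counts. A sketch using the bundled kafka-topics.sh, pointed at the same ZooKeeper address as the consumer command above (the counts here are illustrative; this requires the cluster to be running):

```shell
bin/kafka-topics.sh --create --zookeeper 192.168.1.10:2181 \
  --replication-factor 1 --partitions 1 --topic test

# confirm the topic exists
bin/kafka-topics.sh --list --zookeeper 192.168.1.10:2181
```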