Kafka configuration file notes

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The unique id of this broker in the cluster; must be a non-negative integer. As long as broker.id stays the same, a change of the server's IP address does not affect consumers.
broker.id=0

############################# Socket Server Settings #############################

listeners=PLAINTEXT://:9092

# The port the socket server listens on
#port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=master

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured.  Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads the broker uses to handle network requests; normally does not need to be changed
num.network.threads=3

# The number of threads the broker uses for disk I/O; should be larger than the number of disks
num.io.threads=8

# The socket send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The socket receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request the socket server will accept, protecting the broker against OOM; message.max.bytes must be smaller than socket.request.max.bytes (message.max.bytes can be overridden per topic at creation time)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# Directories in which Kafka stores its log data; separate multiple directories with commas, e.g. /data/kafka-logs-1,/data/kafka-logs-2
log.dirs=/tmp/kafka-logs

# The default number of partitions per topic; a value specified when the topic is created overrides this default
num.partitions=2

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both).
# This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The maximum time log data is retained: 24*7 hours, i.e. 7 days
log.retention.hours=168

# The maximum amount of data retained per partition of a topic. Note that this is a per-partition limit, so the total amount of data a topic can retain is this value times the number of partitions. Also note that if both log.retention.hours and log.retention.bytes are set, a segment is deleted as soon as either limit is exceeded.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see whether they meet the retention criteria for deletion
log.retention.check.interval.ms=300000

# When set to false, log segments are simply deleted once the retention time or size limit is reached; when set to true, log compaction is applied once the limits are reached.
log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
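With the settings above, the broker accepts plaintext client connections on port 9092 and registers itself in ZooKeeper at localhost:2181. As a quick smoke test against this configuration, here is a minimal Java producer sketch; it assumes the kafka-clients library is on the classpath, the broker is reachable at localhost:9092, and the topic name "test" is only an illustration (it must exist or auto-creation must be enabled).

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerSmokeTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Point the client at the broker's listener configured above (PLAINTEXT://:9092).
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "test" is a topic name used only for this illustration.
            producer.send(new ProducerRecord<>("test", "key", "hello kafka"));
            producer.flush();
        }
    }
}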
Here is a production server configuration:

# Replication configurations
num.replica.fetchers=4
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.lag.time.max.ms=10000
controller.socket.timeout.ms=30000
controller.message.queue.size=10

# Log configuration
num.partitions=8
message.max.bytes=1000000
auto.create.topics.enable=true
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.hours=168
log.flush.interval.ms=10000
log.flush.interval.messages=20000
log.flush.scheduler.interval.ms=2000
log.roll.hours=168
log.retention.check.interval.ms=300000
log.segment.bytes=1073741824

# ZK configuration
zookeeper.connection.timeout.ms=6000
zookeeper.sync.time.ms=2000

# Socket server configuration
num.io.threads=8
num.network.threads=8
socket.request.max.bytes=104857600
socket.receive.buffer.bytes=1048576
socket.send.buffer.bytes=1048576
queued.max.requests=16
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100
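Note that this production profile caps record size at message.max.bytes=1000000 on the broker side, so producer requests larger than that will be rejected; it is common to align the client-side max.request.size with the broker limit. The following sketch shows that alignment; the broker address and topic name are assumptions used only for illustration.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SizeAlignedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        // Keep client requests within the broker-side message.max.bytes=1000000 configured above,
        // so oversized batches fail fast on the producer instead of being rejected by the broker.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "1000000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "k", "v")); // "events" is a hypothetical topic
        }
    }
}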