Kafka cluster deployment

Prerequisite: a Java environment
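The install script below assumes a JDK is already present on every broker host; a quick check before proceeding:

    java -version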

  • Installation script:
    kafka.sh
KAFKA_HOME="/usr/local/kafka_2.12-0.10.2.1"
DATA_DIR="/app/data/kafka"
LOG_DIR="/app/logs/kafka"

# Fetch the distribution only if it is not already staged in /app/install
if [ ! -f /app/install/kafka_2.12-0.10.2.1.tgz ]; then
    wget http://www.apache.org/dist/kafka/0.10.2.1/kafka_2.12-0.10.2.1.tgz && \
    sudo mv kafka_2.12-0.10.2.1.tgz /app/install/
fi

# Unpack the distribution into /usr/local if it is not there yet
function decompress() {
    if [ ! -d "${KAFKA_HOME}" ]; then
        sudo tar zxvf /app/install/kafka_2.12-0.10.2.1.tgz -C /usr/local/
        sudo chmod 755 -R "${KAFKA_HOME}"
    fi
}

# Overwrite the stock configuration with the templates from /app/install/kafka_conf
function copyConfTemplate() {
    sudo cp /app/install/kafka_conf/* "${KAFKA_HOME}/config/"
}

# Create the data/log directories and fill in the per-host placeholders
function localize() {
    sudo mkdir -p "${LOG_DIR}"
    sudo chmod a+w -R "${LOG_DIR}"
    sudo mkdir -p "${DATA_DIR}"
    sudo chmod a+w -R "${DATA_DIR}"

    # Derive broker.id from the host's eth0 address
    host_ip=$(ifconfig eth0 | grep -w "inet" | awk '{ print $2 }')
    broker_id=-1
    if [ "${host_ip}" == "10.112.170.191" ]; then
        broker_id=0
    elif [ "${host_ip}" == "10.112.170.192" ]; then
        broker_id=1
    elif [ "${host_ip}" == "10.112.170.193" ]; then
        broker_id=2
    fi
    echo "${host_ip}----${broker_id}"

    # Replace the placeholders in server.properties with this host's values
    sudo sed -i "s/id_placeholder/${broker_id}/g" "${KAFKA_HOME}/config/server.properties"
    sudo sed -i "s/ip_placeholder/${host_ip}/g" "${KAFKA_HOME}/config/server.properties"

    # Make kafka-run-class.sh write its log4j output (kafka.logs.dir) under ${LOG_DIR}
    sudo sed "16 iLOG_DIR=${LOG_DIR}" -i "${KAFKA_HOME}/bin/kafka-run-class.sh"
}

decompress
copyConfTemplate
localize
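After the script has run, the placeholders in server.properties should carry that host's values; a quick check (expected output shown for the first broker, 10.112.170.191):

    grep -E "^(broker.id|advertised.host.name)=" /usr/local/kafka_2.12-0.10.2.1/config/server.properties
    # broker.id=0
    # advertised.host.name=10.112.170.191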
  • Configuration files, placed under /app/install/kafka_conf:
    log4j.properties
log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Turn on all our debugging info
#log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG, kafkaAppender
#log4j.logger.kafka.client.ClientUtils=DEBUG, kafkaAppender
#log4j.logger.kafka.perf=DEBUG, kafkaAppender
#log4j.logger.kafka.perf.ProducerPerformance$ProducerThread=DEBUG, kafkaAppender
#log4j.logger.org.I0Itec.zkclient.ZkClient=DEBUG
log4j.logger.kafka=INFO, kafkaAppender

log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false

#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false

log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false

log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false

# Change this to debug to get the actual audit log for authorizer.
log4j.logger.kafka.authorizer.logger=WARN, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false

    producer.properties

bootstrap.servers=localhost:9092
compression.type=none

    server.properties

broker.id=id_placeholder

# The number of threads handling network requests
num.network.threads=2

# The number of threads doing disk I/O
num.io.threads=2

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/app/logs/kafka/kafka-logs-0

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=zk-01:2181,zk-02:2181,zk-03:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

advertised.host.name=ip_placeholder

    tools-log4j.properties

log4j.rootLogger=WARN, stderr
log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stderr.Target=System.err

    zookeeper.properties

# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
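Note that this zookeeper.properties is the stock single-node config shipped with Kafka; the brokers here point at an external ensemble (zk-01, zk-02, zk-03 in server.properties), so those hostnames must resolve from every broker. If you only want a quick single-node test and do not have that ensemble, Kafka's bundled script can start a local ZooKeeper with this file (set zookeeper.connect to localhost:2181 in that case; this is not a production setup):

    /usr/local/kafka_2.12-0.10.2.1/bin/zookeeper-server-start.sh -daemon /usr/local/kafka_2.12-0.10.2.1/config/zookeeper.properties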
  • Run kafka.sh (once on each broker; see the sketch below):
    sh kafka.sh
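Because the script derives broker.id from the local eth0 address, it has to run on each of the three hosts. One possible way to do that from a jump host, assuming password-less SSH to the brokers (a sketch only):

    for h in 10.112.170.191 10.112.170.192 10.112.170.193; do
        scp kafka.sh "${h}:/tmp/" && ssh "${h}" "sh /tmp/kafka.sh"
    done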
  • Start the broker on each node:
    /usr/local/kafka_2.12-0.10.2.1/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.12-0.10.2.1/config/server.properties
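Since the install script sets LOG_DIR to /app/logs/kafka, the broker's log4j output lands there; tailing server.log should show the broker coming up (the exact wording of the startup line may vary between versions):

    tail -f /app/logs/kafka/server.log
    # look for a line similar to: [Kafka Server 0], started (kafka.server.KafkaServer)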
  • Verify that the broker process is running on every node (jps should list a Kafka process):
    jps
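jps only confirms the JVM is up; to exercise the cluster end to end you can create a replicated topic and push a message through it with the CLI tools shipped in 0.10.2.1 (the topic name test is arbitrary):

    KAFKA_HOME=/usr/local/kafka_2.12-0.10.2.1

    # create a topic replicated across all three brokers
    ${KAFKA_HOME}/bin/kafka-topics.sh --create --zookeeper zk-01:2181 --replication-factor 3 --partitions 3 --topic test

    # every partition should report a leader and three in-sync replicas
    ${KAFKA_HOME}/bin/kafka-topics.sh --describe --zookeeper zk-01:2181 --topic test

    # produce one message and read it back
    echo "hello" | ${KAFKA_HOME}/bin/kafka-console-producer.sh --broker-list 10.112.170.191:9092 --topic test
    ${KAFKA_HOME}/bin/kafka-console-consumer.sh --bootstrap-server 10.112.170.191:9092 --topic test --from-beginning --max-messages 1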