Kafka Cluster Setup


Background: Why Kafka

When we make heavy use of distributed databases and distributed compute clusters, we tend to run into problems like these:

  • I want to analyze user behavior (pageviews) so that I can design better ad placements;
  • I want to aggregate users' search keywords and analyze the current trends;
  • Some data is wasteful to keep in a database, yet writing it straight to disk makes it inefficient to work with.

This is where a message system comes in, especially a distributed one.

Additionally:

    In many common big-data scenarios we need both offline and real-time analysis of our data. Offline analysis relies on Hadoop-related frameworks (MapReduce, Hive, and so on), while real-time needs can be met with Storm. To unify offline and real-time computation, we can feed both from a single data source and then fan the data out to the offline analysis system and the real-time system separately. At that point it makes sense to connect the data source (collected by Flume) directly to a message middleware such as Kafka. Integrating Flume + Kafka, Flume acts as the message producer, publishing the data it gathers (log data, business data, etc.) to Kafka; a Storm Topology then acts as the message Consumer, and the Storm cluster handles the two scenarios below (a minimal Flume-side configuration is sketched after the list):

  • Use a Storm Topology directly to analyze and process the data in real time
  • Integrate Storm + HDFS, writing the processed messages to HDFS for offline analysis
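As a concrete illustration of the Flume-as-producer side, here is a minimal, hypothetical Flume agent configuration that tails a log file and publishes every line to a Kafka topic. It assumes Flume 1.6+ (which ships a built-in Kafka sink); the agent name, topic name, log path, and the file name kafka-agent.conf are invented for the example.

    # kafka-agent.conf -- hypothetical: tail an application log into Kafka
    a1.sources  = r1
    a1.channels = c1
    a1.sinks    = k1

    # Source: follow an application log file (path is an assumption)
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /var/log/app/access.log
    a1.sources.r1.channels = c1

    # Channel: buffer events in memory between source and sink
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 10000

    # Sink: publish each event to Kafka (Flume 1.6 built-in Kafka sink)
    a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
    a1.sinks.k1.topic = app-logs
    a1.sinks.k1.brokerList = 192.168.0.102:19092
    a1.sinks.k1.channel = c1

The agent would then be launched with: flume-ng agent --conf conf --conf-file kafka-agent.conf --name a1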

What Kafka Is

    Kafka is a distributed messaging system, written in Scala by LinkedIn, that serves as the foundation of LinkedIn's Activity Stream and its operational data processing pipeline (Pipeline). It is highly scalable horizontally and offers high throughput.

Adoption: Kafka has been used by many companies of different kinds as a data pipeline and messaging system for many types of data, for example:

Taobao, Alipay, Baidu, Twitter, and others.

More and more open-source distributed processing systems, such as Apache Flume, Apache Storm, Spark, and Elasticsearch, now support integration with Kafka.


The AMQP Protocol (Advanced Message Queuing Protocol)


Consumer: a client application that requests messages from the message queue;
Producer: a client application that publishes messages to the broker;
Broker (the AMQP server): receives the messages sent by producers and routes them to queues inside the server;

Kafka Basic Architecture


Topic: a category of messages. A topic is analogous to the sports, entertainment, or education sections in the news; in real projects there is usually one topic per business domain;
Partition: the message data of a topic is organized into multiple partitions. The partition is the smallest unit in which Kafka organizes its message queue, and each partition can be viewed as a FIFO queue (illustrated right below);
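Once the cluster built in the setup section below is running, this layout is easy to observe from the command line. A small sketch; the topic name page-views and the partition count are arbitrary choices for illustration:

    kafka-topics.sh --create --zookeeper localhost:12181 --replication-factor 1 --partitions 3 --topic page-views
    # --describe prints one line per partition; message order (FIFO) is guaranteed only within a partition
    kafka-topics.sh --describe --zookeeper localhost:12181 --topic page-views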

Kafka Cluster Setup

ZooKeeper cluster setup (see the earlier chapter)
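Before continuing, it is worth confirming that the ZooKeeper ensemble is healthy on every node, for instance with the scripts and four-letter commands bundled with ZooKeeper 3.4.x (the 12181 client port matches the zookeeper.connect setting used later):

    zkServer.sh status                    # across the three nodes: one "leader", two "follower"
    echo ruok | nc 192.168.0.102 12181    # a healthy server answers "imok"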

Kafka cluster

Environment preparation

  • Three servers (192.168.0.102, 192.168.0.103, 192.168.0.104)
  • Kafka version: kafka_2.9.2-0.8.1.1

Configuration

  1. Go to your installation directory: cd /usr/local/program and create the message-log folder: mkdir kafkaLogs (make sure the resulting path matches the log.dirs setting in server.properties below)
  2. Configure the environment variables:

     # set environment
     export JAVA_HOME=/usr/local/program/jdk1.7.0_79
     export ZK_HOME=/usr/local/program/zk/zookeeper-3.4.6
     export KAFKA_HOME=/usr/local/program/kafka/kafka_2.9.2-0.8.1.1
     export PATH=$JAVA_HOME/bin:$ZK_HOME/bin:$KAFKA_HOME/bin:$PATH
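     Assuming the exports above were appended to /etc/profile (the exact file is an assumption; any shell profile works), reload and verify:

     source /etc/profile
     echo $KAFKA_HOME    # should print /usr/local/program/kafka/kafka_2.9.2-0.8.1.1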

  3. Configure Kafka: cd /usr/local/program/kafka/kafka_2.9.2-0.8.1.1/config/ and edit server.properties:

     # see kafka.server.KafkaConfig for additional details and defaults

     ############################# Server Basics #############################

     # The id of the broker. This must be set to a unique integer for each broker in the cluster.
     broker.id=0

     ############################# Socket Server Settings #############################

     # The TCP port the socket server listens on (default 9092)
     port=19092

     # Hostname the broker will bind to. If not set, the server will bind to all interfaces.
     host.name=192.168.0.102

     # Hostname/port the broker will advertise to producers and consumers. If not set,
     # the values of "host.name" and "port" are used.
     #advertised.host.name=<hostname routable by clients>
     #advertised.port=<port accessible by clients>

     # The number of threads handling network requests
     num.network.threads=3

     # The number of threads doing disk I/O
     num.io.threads=8

     # The send buffer (SO_SNDBUF) used by the socket server, raised to ~1 MB (default 102400)
     socket.send.buffer.bytes=1048576

     # The receive buffer (SO_RCVBUF) used by the socket server, raised to ~1 MB (default 102400)
     socket.receive.buffer.bytes=1048576

     # The maximum size of a request the socket server will accept (protection against OOM);
     # it must not exceed the Java heap size
     socket.request.max.bytes=104857600

     ############################# Log Basics #############################

     # A comma separated list of directories under which to store message log files
     log.dirs=/usr/local/program/kafka/kafkaLogs

     # The default number of log partitions per topic. More partitions allow greater
     # consumption parallelism, but also mean more files across the brokers.
     num.partitions=2

     ############################# Log Flush Policy #############################

     # Messages are written to the filesystem immediately, but by default fsync() is
     # left to the OS cache. Flushing can be forced every N messages, after a time
     # interval, or both; durability, latency, and throughput trade off against each other.
     #log.flush.interval.messages=10000
     #log.flush.interval.ms=1000

     ############################# Log Retention Policy #############################

     # How long messages are retained: 168 hours (7 days)
     log.retention.hours=168

     # Maximum size of a single message accepted by the broker, ~5 MB (default 1 MB)
     message.max.bytes=5048576

     # Default replication factor for each partition of a topic (default 1)
     default.replication.factor=2

     # Maximum bytes fetched per partition by replicas; keep it >= message.max.bytes
     replica.fetch.max.bytes=5048576

     # A size-based retention policy for logs
     #log.retention.bytes=1073741824

     # The maximum size of a log segment file (messages are appended); once this size
     # is reached a new segment file is created
     log.segment.bytes=536870912

     # The interval at which log segments are checked for expired messages to delete
     log.retention.check.interval.ms=60000

     # By default the log cleaner is disabled and retention simply deletes expired segments.
     # If log.cleaner.enable=true, individual logs can be marked for log compaction.
     log.cleaner.enable=false

     ############################# Zookeeper #############################

     # Zookeeper connection string: comma separated host:port pairs, optionally followed
     # by a chroot path that becomes the root for all kafka znodes
     zookeeper.connect=192.168.0.102:12181,192.168.0.103:12181,192.168.0.104:12181

     # Timeout in ms for connecting to zookeeper
     zookeeper.connection.timeout.ms=1000000
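     A quick way to double-check the effective (non-comment) settings after editing:

     grep -Ev '^\s*(#|$)' /usr/local/program/kafka/kafka_2.9.2-0.8.1.1/config/server.properties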

  4. Perform the same configuration on the other machines, changing broker.id in each config file to 1 and 2 respectively, and host.name to the corresponding machine's IP, e.g. as sketched below.
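     A hypothetical one-liner per host to adapt a copied config file, here for 192.168.0.103:

     sed -i 's/^broker.id=0/broker.id=1/; s/^host.name=.*/host.name=192.168.0.103/' server.properties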
  5. Start Kafka in the background by running: kafka-server-start.sh /usr/local/program/kafka/kafka_2.9.2-0.8.1.1/config/server.properties &
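     To keep the broker alive after the shell exits and to capture its output, a variant like the following may be preferable (the log path is an assumption); jps should then list a Kafka process:

     nohup kafka-server-start.sh /usr/local/program/kafka/kafka_2.9.2-0.8.1.1/config/server.properties > /usr/local/program/kafka/kafka-server.log 2>&1 &
     jps    # expect a "Kafka" entry next to QuorumPeerMain (ZooKeeper)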
  6. Do the same on the other machines.
  7. Create a topic by running: kafka-topics.sh --create --zookeeper localhost:12181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
  8. Inspect the topic by running: kafka-topics.sh --describe --zookeeper localhost:12181 --topic my-replicated-topic
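     The output should look roughly like the following (the broker IDs shown for Leader, Replicas, and Isr will vary):

     Topic:my-replicated-topic   PartitionCount:1   ReplicationFactor:3   Configs:
         Topic: my-replicated-topic   Partition: 0   Leader: 0   Replicas: 0,1,2   Isr: 0,1,2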
  9. Create a producer by running: kafka-console-producer.sh --broker-list 192.168.0.102:19092 --topic my-replicated-topic
  10. Create a consumer (on another machine) by running: kafka-console-consumer.sh --zookeeper localhost:12181 --from-beginning --topic my-replicated-topic

    Test: type a message at the producer's prompt; the same message should appear in the consumer's terminal.
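    The same round trip can also be scripted non-interactively, since the console producer reads from stdin:

     echo "hello kafka" | kafka-console-producer.sh --broker-list 192.168.0.102:19092 --topic my-replicated-topic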

  11. Done. At this point the Kafka cluster is up and running.
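As a final sanity check of replication, one can stop the broker that --describe reported as the partition leader and re-run the describe command: a surviving broker should take over as Leader, and the Isr list should shrink accordingly.

    kafka-server-stop.sh    # run on the current leader's machine
    kafka-topics.sh --describe --zookeeper localhost:12181 --topic my-replicated-topic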


Reference: jikexueyuan
