Flume and Kafka Integration: A Detailed Walkthrough


Environment

Name        Version          Download
CentOS      7.0 (64-bit)     Baidu
Zookeeper   3.4.5
Flume       1.6.0
Kafka       2.10-0.10.1.1

Configuring Flume

I won't cover Flume basics here; if you are starting from scratch, see my earlier Flume article:

Flume notes

Here is the configuration file:

[root@zero239 kafka_2.10-0.10.1.1]# cat /opt/hadoop/apache-flume-1.6.0-bin/conf/kafka-conf.properties
# The configuration file needs to define the sources,
# the channels and the sinks.
# Sources, channels and sinks are defined per agent,
# in this case called 'agent'
agent.sources = r1
agent.channels = c1
agent.sinks = s1

# For each one of the sources, the type is defined
#agent.sources.r1.type = spooldir
#agent.sources.r1.command = /opt/test/logs/data
#agent.sources.r1.fileHeader = true
#agent.sources.r1.channels = c1
agent.sources.r1.type = spooldir
agent.sources.r1.spoolDir = /opt/test/logs/data
agent.sources.r1.fileHeader = true

# Each sink's type must be defined
#agent.sinks.s1.type = logger
agent.sinks.s1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.s1.topic = logstest
agent.sinks.s1.brokerList = zero230:9092
agent.sinks.s1.requiredAcks = 1
agent.sinks.s1.batchSize = 2

# Each channel's type is defined.
agent.channels.c1.type = memory
agent.channels.c1.capacity = 100
agent.sources.r1.channels = c1
agent.sinks.s1.channel = c1
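Note that the spooldir source expects the spooling directory to already exist when the agent starts, otherwise the agent fails on startup. If you have not created the directory referenced above yet, something like this should do:

mkdir -p /opt/test/logs/data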

Configuring Kafka

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

# Switch to enable topic deletion or not, default value is false
#delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma seperated list of directories under which to store log files
log.dirs=/opt/hadoop/kafka_2.10-0.10.1.1/logs/tmp

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=zero230:2181,zero231:2181,zero239:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

I have already set up a Zookeeper cluster, so zookeeper.connect points at my own Zookeeper nodes. If you have not set up a cluster, you can use the Zookeeper that ships with Kafka instead.

Zookeeper cluster setup and configuration
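If you do use the Zookeeper that ships with Kafka, its settings live in config/zookeeper.properties; on a stock download the file looks roughly like this (adjust dataDir and clientPort if needed):

# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0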

Start Kafka and verify that it works

  1. Start the Zookeeper cluster (skip this step if you have not set up a cluster; a start command is sketched after this list)
  2. Otherwise, start the Zookeeper bundled with Kafka:

    bin/zookeeper-server-start.sh config/zookeeper.properties
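For step 1, assuming a standard Zookeeper 3.4.5 installation on each node (the install path below is only an example), run the start script on every node of the cluster and check that each one has joined the quorum:

/opt/hadoop/zookeeper-3.4.5/bin/zkServer.sh start
/opt/hadoop/zookeeper-3.4.5/bin/zkServer.sh status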

3. Start Kafka

server1.properties is the name of the file you edited above:

bin/kafka-server-start.sh config/server1.properties
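If you do not want to keep a terminal tied up, kafka-server-start.sh also accepts a -daemon flag that runs the broker in the background, for example:

bin/kafka-server-start.sh -daemon config/server1.properties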

4. Create a topic named logstest

./bin/kafka-topics.sh --create --zookeeper zero230:2181 --replication-factor 1 --partitions 1 --topic logstest

5. Check that the topic was created successfully

./bin/kafka-topics.sh --list --zookeeper localhost:2181
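Besides --list, you can also describe the topic to confirm its partition and replica assignment; any node from the Zookeeper connection string works as the --zookeeper address:

./bin/kafka-topics.sh --describe --zookeeper zero230:2181 --topic logstest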

6. Start a console producer (think of it as a user that is already generating data; that makes it easier to picture)

bin/kafka-console-producer.sh --broker-list zero230:9092 --topic logstest

7. Start a console consumer (it prints whatever the producer sends, so you can see the output on the other end)

bin/kafka-console-consumer.sh --zookeeper zero230:2181 --topic logstest --from-beginning
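As a quick smoke test of steps 6 and 7, you can pipe a message into the console producer from another terminal; the consumer window should print it back:

echo "hello kafka" | bin/kafka-console-producer.sh --broker-list zero230:9092 --topic logstest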

Start Flume and verify that it hands data off to Kafka

[root@zero239 apache-flume-1.6.0-bin]# ./bin/flume-ng agent --conf conf -f ./conf/kafka-conf.properties -n agent -Dflume.root.logger=INFO,console
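With the agent running, you can verify the whole pipeline by dropping a file into the spooling directory. Flume should ship its contents to the logstest topic (the console consumer from step 7 will print the lines) and rename the file with a .COMPLETED suffix once it has been processed:

echo "flume to kafka test $(date)" > /opt/test/logs/data/test.log
ls /opt/test/logs/data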

Screenshot of the successful hand-off.

As you can see in the Flume sinks configuration, I set the sink to Kafka, which means the output is written to Kafka:

agent.sinks.s1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.s1.topic = logstest            # the topic we just created
agent.sinks.s1.brokerList = zero230:9092   # the broker that receives the data
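If you later run brokers on more than one machine, brokerList also accepts a comma-separated host:port list; the extra hostnames below are only placeholders for whichever brokers you actually have:

agent.sinks.s1.brokerList = zero230:9092,zero231:9092,zero239:9092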

With that, the integration of Flume and Kafka is complete.

Coming up next

Reading from Kafka in Java
