Integrating Kafka with Flume

1) Prepare the JARs

1. Copy the following JARs from the lib directory of the Kafka home directory into Flume's lib directory:

kafka_2.10-0.8.2.1.jar, kafka-clients-0.8.2.1.jar, jopt-simple-3.2.jar, metrics-core-2.2.0.jar, scala-library-2.10.4.jar, zkclient-0.3.jar, etc.

2. Copy the following JAR into the Flume home directory; the JARs listed in step 1 are its dependencies:

Download the Flume-Kafka plugin package: flumeng-kafka-plugin.jar
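
A minimal sketch of the two copy steps, assuming Kafka is installed under /usr/local/kafka and Flume under /usr/local/flume (both paths are assumptions; adjust to your installation — in the stock 0.8.2.1 tarball the Kafka JARs live under libs/):

# copy the Kafka client JARs and their dependencies into Flume's lib directory
cd /usr/local/kafka/libs
cp kafka_2.10-0.8.2.1.jar kafka-clients-0.8.2.1.jar jopt-simple-3.2.jar \
   metrics-core-2.2.0.jar scala-library-2.10.4.jar zkclient-0.3.jar \
   /usr/local/flume/lib/
# copy the downloaded plugin JAR into the Flume home directory
cp /path/to/flumeng-kafka-plugin.jar /usr/local/flume/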

2) Configure the agent configuration file as follows (the startup command below expects it to be saved as conf/flume-kafka.conf):

#agent section
producer.sources = s
producer.channels = c
producer.sinks = r


#source section
#producer.sources.s.type = seq
producer.sources.s.channels = c
producer.sources.s.type = exec
producer.sources.s.command = tail -fn 1 /letv/logs/test.log
# Each sink's type must be defined
producer.sinks.r.type = org.apache.flume.plugins.KafkaSink
producer.sinks.r.metadata.broker.list=10.148.13.10:9092,10.148.13.11:9092,10.148.13.12:9092,10.148.13.13:9092,10.148.13.14:9092,10.148.13.15:9092,10.148.13.16:9092,10.148.13.17:9092,10.148.13.18:9092,10.148.13.19:9092
#producer.sinks.r.partition.key=0
#producer.sinks.r.partitioner.class=org.apache.flume.plugins.SinglePartition
producer.sinks.r.serializer.class=kafka.serializer.StringEncoder
producer.sinks.r.request.required.acks=-1
producer.sinks.r.max.message.size=1000000
producer.sinks.r.producer.type=sync
producer.sinks.r.custom.encoding=UTF-8
producer.sinks.r.custom.topic.name=test-topic


#Specify the channel the sink should use
producer.sinks.r.channel = c


# Each channel's type is defined.
producer.channels.c.type = memory
producer.channels.c.capacity = 1000000
producer.channels.c.transactionCapacity = 1000000
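
The exec source above tails /letv/logs/test.log, so that file must exist before the agent starts. A small preparation step, using the path from the configuration:

# create the log file tailed by the exec source, if it does not exist yet
mkdir -p /letv/logs
touch /letv/logs/test.log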


Testing:

  1. Start the ZooKeeper service, which Kafka depends on.
  2. Start the Kafka service and create a topic named test-topic:

./kafka-topics.sh --create --zookeeper master-active:2181 --replication-factor 1 --partitions 1 --topic test-topic
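
Steps 1 and 2 use the scripts shipped with the Kafka distribution. A minimal sketch, assuming they are run from the Kafka home directory with the stock configuration files (paths are assumptions):

# start ZooKeeper, then the Kafka broker, in the background
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &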

  3. Start the Flume service (the agent name passed with -n must match the property prefix used in the configuration, producer here):

./flume-ng agent -n producer -f ../conf/flume-kafka.conf -Dflume.root.logger=INFO,console

  4. Append a test message to the tailed log file:

echo "hello world , kafka and flume !" >> /letv/logs/test.log

  5. Start a Kafka console consumer to view the messages flowing through the pipeline:

./kafka-console-consumer.sh --zookeeper master-active:2181 --topic test-topic --from-beginning
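
If the pipeline is wired up correctly, the consumer should print each line appended to the log file, e.g. the message from step 4:

hello world , kafka and flume !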


