A First Look at a Real-Time Log Streaming System with Kafka + Flume + HDFS


The goal of this experiment is to have Flume receive messages from Kafka and store them in HDFS. If you have already set up Hadoop, Flume, and Kafka, this goes quickly and the idea is straightforward: most of the configuration lives in Flume, which acts as the bridge that ties Kafka and HDFS together. That also determines the startup order: start Kafka and HDFS first, then start Flume last so it can connect to both.

For this experiment, my environment is as follows:

  Java: 1.8

  Kafka: 2.11

  Flume: 1.6

  Hadoop: 2.6

  Hadoop and Kafka both use their default configuration.

1. First start HDFS and create a directory in HDFS (/usr/feiy/flume-data) to hold the Kafka messages collected by Flume.

$ sbin/start-dfs.sh
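The target directory mentioned in step 1 can be created with a standard HDFS shell command; for example (the path simply matches the one used later in this post):

$ bin/hadoop fs -mkdir -p /usr/feiy/flume-data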

2. Then start the Kafka service and create a topic (flume-data). You can also start a console producer, ready to produce messages to the flume-data topic for Flume to consume.

Start ZooKeeper (from the Kafka installation directory):

$ bin/zookeeper-server-start.sh config/zookeeper.properties

Start the Kafka server:

$ bin/kafka-server-start.sh config/server.properties

Create the topic flume-data:

$ bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic flume-data
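Optionally, you can confirm the topic exists. Since this version of the topics tool still talks to ZooKeeper (as in the create command above), a describe call along these lines should work:

$ bin/kafka-topics.sh --describe --zookeeper 127.0.0.1:2181 --topic flume-data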

Start a kafka-console-producer:

$ bin/kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic flume-data
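Before wiring in Flume, it can help to check that messages actually flow through Kafka. With the Kafka versions of this era (the Flume 1.6 Kafka source still connects via ZooKeeper), a console consumer can be started in another terminal roughly like this:

$ bin/kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic flume-data --from-beginning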

3. Configure Flume, start it, and wait for the Kafka producer to send messages.

Edit conf/flume.conf (from the Flume installation directory):

# The configuration file needs to define the sources,
# the channels and the sinks.
# Sources, channels and sinks are defined per agent,
# in this case called 'agent'
agent.sources = kafkaSource
agent.channels = memoryChannel
agent.sinks = hdfsSink

# The channel can be defined as follows.
agent.sources.kafkaSource.channels = memoryChannel
agent.sources.kafkaSource.type=org.apache.flume.source.kafka.KafkaSource
agent.sources.kafkaSource.zookeeperConnect=127.0.0.1:2181
agent.sources.kafkaSource.topic=flume-data
#agent.sources.kafkaSource.groupId=flume
agent.sources.kafkaSource.kafka.consumer.timeout.ms=100

agent.channels.memoryChannel.type=memory
agent.channels.memoryChannel.capacity=1000
agent.channels.memoryChannel.transactionCapacity=100

# the sink of hdfs
agent.sinks.hdfsSink.type=hdfs
agent.sinks.hdfsSink.channel = memoryChannel
agent.sinks.hdfsSink.hdfs.path=hdfs://master:9000/usr/feiy/flume-data
agent.sinks.hdfsSink.hdfs.writeFormat=Text
agent.sinks.hdfsSink.hdfs.fileType=DataStream
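One note on this config: by default the Flume HDFS sink rolls to a new file quite aggressively (every 30 seconds, 10 events, or 1024 bytes, whichever comes first), which is why the output at the end of this post shows several small FlumeData files. If you prefer fewer, larger files, the sink's standard roll properties can be tuned, for example (not used in this experiment):

agent.sinks.hdfsSink.hdfs.rollInterval=60
agent.sinks.hdfsSink.hdfs.rollCount=0
agent.sinks.hdfsSink.hdfs.rollSize=0

Setting rollCount and rollSize to 0 disables those triggers, so files would roll only on the 60-second interval.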
Start flume-ng:

$ bin/flume-ng agent --conf conf --conf-file conf/flume.conf --name agent -Dflume.root.logger=INFO,console

Send a few messages from the producer console, then check the generated files through the HDFS command line:

Last login: Sat Nov 19 13:15:16 2016 from 192.168.61.1
[root@master ~]# hadoop fs -ls /usr/feiy/flume-data
Found 2 items
-rw-r--r--   1 root supergroup         27 2016-11-19 13:46 /usr/feiy/flume-data/FlumeData.1479534366317
-rw-r--r--   1 root supergroup         18 2016-11-19 13:47 /usr/feiy/flume-data/FlumeData.1479534415398
[root@master ~]# hadoop fs -cat /usr/feiy/flume-data/FlumeData.1479534415398
55555555
88888888
[root@master ~]# hadoop fs -cat /usr/feiy/flume-data/FlumeData.1479534366317
11111111
22222222
44444444
[root@master ~]#


