Flume cluster setup


Data collection side:
source: spooldir, which scans a directory and picks up new files
channel: memory
sink: avro sink, pointing at the receiving agent (see the fragment below)

Data receiving side:
source: avro source
channel: memory
sink: hdfs sink (a logger sink can be used instead while debugging)
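The two tiers meet at the avro hop: each collection agent's avro sink points at the host and port that the receiving agent's avro source binds to. A minimal sketch of that pairing, using the hostname and port from the full configurations below:

# on a collection agent (hadoop / hadoop2)
agent.sinks.sink1.type = avro
agent.sinks.sink1.hostname = hadoop1
agent.sinks.sink1.port = 23004

# on the receiving agent (hadoop1)
agent.sources.source1.type = avro
agent.sources.source1.bind = hadoop1
agent.sources.source1.port = 23004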

Reference:
http://www.xuebuyuan.com/2142003.html

I recently used Flume 1.4 for log collection, and this is the cluster configuration in detail. Three machines are involved:

hadoop   192.168.80.100
hadoop1  192.168.80.101
hadoop2  192.168.80.102

Log files produced in the monitored directories on hadoop and hadoop2 are forwarded by their agents to hadoop1, which finally writes them to HDFS. Install Flume 1.4.0 under /usr/local/ on all three machines; the installations are identical, and the only difference is the configuration file under /usr/local/flume/conf/ on each machine.

[root@hadoop flume]# cd conf/
[root@hadoop conf]# ls
flume-env.sh.template  flume-master  log4j.properties

On machine hadoop, rename the Flume configuration file to flume-master and set its contents to:

agent.sources = source1
agent.channels = memoryChannel
agent.sinks = sink1

# the source scans /root/hmbbs for new files
agent.sources.source1.type = spooldir
agent.sources.source1.spoolDir = /root/hmbbs
agent.sources.source1.channels = memoryChannel

# the channel buffers events in memory
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 1000
agent.channels.memoryChannel.keep-alive = 1000

# the sink forwards events to the avro source on hadoop1
agent.sinks.sink1.type = avro
agent.sinks.sink1.hostname = hadoop1
agent.sinks.sink1.port = 23004
agent.sinks.sink1.channel = memoryChannel

On machine hadoop1, rename the Flume configuration file to flume-node and set its contents to:

agent.sources = source1
agent.channels = memoryChannel
agent.sinks = sink1

# the source receives events sent by the avro sinks on hadoop and hadoop2
agent.sources.source1.type = avro
agent.sources.source1.bind = hadoop1
agent.sources.source1.port = 23004
agent.sources.source1.channels = memoryChannel

# the channel buffers events in memory
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.channels.memoryChannel.keep-alive = 100

# the sink writes the collected events to HDFS
agent.sinks.sink1.type = hdfs
agent.sinks.sink1.hdfs.path = hdfs://hadoop:9000/hmbbs/%y-%m-%d/%H%M%S
agent.sinks.sink1.hdfs.fileType = DataStream
agent.sinks.sink1.hdfs.writeFormat = TEXT
agent.sinks.sink1.hdfs.round = true
agent.sinks.sink1.hdfs.roundValue = 5
agent.sinks.sink1.hdfs.roundUnit = minute
#agent.sinks.sink1.hdfs.rollInterval = 1
agent.sinks.sink1.hdfs.useLocalTimeStamp = true
agent.sinks.sink1.hdfs.filePrefix = events-
agent.sinks.sink1.channel = memoryChannel

On machine hadoop2, rename the Flume configuration file to flume-master and set its contents to the same configuration as on hadoop:

agent.sources = source1
agent.channels = memoryChannel
agent.sinks = sink1

agent.sources.source1.type = spooldir
agent.sources.source1.spoolDir = /root/hmbbs
agent.sources.source1.channels = memoryChannel

agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 1000
agent.channels.memoryChannel.keep-alive = 1000

agent.sinks.sink1.type = avro
agent.sinks.sink1.hostname = hadoop1
agent.sinks.sink1.port = 23004
agent.sinks.sink1.channel = memoryChannel

With the configuration in place, start the Flume agent on each machine from its Flume directory (here /usr/local/flume). Make sure Hadoop (HDFS) is running, and always start the agent on hadoop1 first: the avro sinks on hadoop and hadoop2 connect to hadoop1:23004 and will throw exceptions if nothing is listening there yet.

hadoop1:  bin/flume-ng agent -n agent -c conf -f conf/flume-node -Dflume.root.logger=DEBUG,console
hadoop:   bin/flume-ng agent -n agent -c conf -f conf/flume-master -Dflume.root.logger=DEBUG,console
hadoop2:  bin/flume-ng agent -n agent -c conf -f conf/flume-master -Dflume.root.logger=DEBUG,console

Once all three agents are running, drop new files into /root/hmbbs on hadoop or hadoop2; the cluster collects them on hadoop1 and writes them to HDFS.
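One detail worth spelling out: the %y-%m-%d and %H%M%S escapes in hdfs.path are resolved from a timestamp header on each event. The spooldir source does not add such a header, which is why hdfs.useLocalTimeStamp = true is set, so the sink stamps events with its own clock; hdfs.round, roundValue and roundUnit then round that timestamp down into 5-minute buckets. A worked example with an illustrative time:

event arrives at the HDFS sink at           2014-07-01 10:23:41
rounded down to the nearest 5 minutes    -> 2014-07-01 10:20:00
directory the event is written to        -> hdfs://hadoop:9000/hmbbs/14-07-01/102000/
file names in that directory begin with     events-  (from hdfs.filePrefix)

All events arriving between 10:20:00 and 10:24:59 end up in the same directory.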
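A minimal way to verify the pipeline end to end, assuming the hadoop command-line client can reach the NameNode (file names below are only examples):

# on hadoop or hadoop2: create a file elsewhere and move it into the spooling
# directory (the spooldir source expects files to be complete and immutable once
# they appear); after ingestion Flume renames it with a .COMPLETED suffix
echo "hello flume" > /tmp/test.log
mv /tmp/test.log /root/hmbbs/
ls /root/hmbbs

# on any machine with HDFS access: list the date/time directories and read the events
hadoop fs -lsr /hmbbs
hadoop fs -cat /hmbbs/*/*/events-*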