CentOS 7.0 + Flume: shipping log output to HDFS
Environment
CentOS 7.0
hadoop 2.7.3 (see: CentOS 7.0 + hadoop 2.7 cluster setup)
flume 1.8.0 (see: Installing Flume on CentOS 7.0)
Download, installation, and basic configuration
See the previous post: Installing Flume on CentOS 7.0
Configuration
Create the HDFS output directory
hadoop fs -mkdir -p /test/flume/output
Create a local log file
cd /tmp
touch flume_test.log
echo 'hello world' >> flume_test.log
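The exec source configured below does nothing more than run tail -f and forward each new stdout line as an event. You can observe that behavior locally, without Flume, in a small sketch (the 1-second timeout and the output file /tmp/tail_out.txt are just for the demonstration):

```shell
# tail -f first prints the existing lines, then streams anything
# appended while it is running
echo 'hello world' > /tmp/flume_test.log
# append a line shortly after tail starts
( sleep 0.2; echo 'appended later' >> /tmp/flume_test.log ) &
# run tail -f for one second and capture what it emits
timeout 1 tail -f /tmp/flume_test.log > /tmp/tail_out.txt || true
wait
cat /tmp/tail_out.txt
```

Both the pre-existing line and the line appended after tail started show up in the captured output, which is exactly the stream the exec source hands to the channel.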
Create a new flume-conf
cp flume-conf.properties flume-conf2.properties
vi flume-conf2.properties
Configure the HDFS sink
a1.sources=r1
a1.channels=c1
a1.sinks=k1
a1.sources.r1.type=exec
a1.sources.r1.channels=c1
a1.sources.r1.command=tail -f /tmp/flume_test.log
a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path=hdfs://master:9000/test/flume/output
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.writeFormat=Text
a1.sinks.k1.hdfs.maxOpenFiles=1
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.rollInterval=0
a1.sinks.k1.hdfs.rollSize=1000000
a1.sinks.k1.hdfs.batchSize=10000
a1.sinks.k1.channel=c1
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
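A note on the roll settings: with hdfs.rollCount=0 and hdfs.rollInterval=0, only size-based rolling is in effect, so the sink starts a new file roughly every 1 MB (hdfs.rollSize is in bytes). If you would rather roll on a time interval, a variant might look like this (the 600-second value is illustrative):

```properties
# roll a new HDFS file every 10 minutes instead of by size
a1.sinks.k1.hdfs.rollInterval = 600
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
```

Leaving any of the three roll* settings non-zero means that condition can also trigger a roll, so disable the ones you do not want by setting them to 0.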
Start the agent
./bin/flume-ng agent --conf ./conf --conf-file ./conf/flume-conf2.properties --name a1 -Dflume.root.logger=INFO,console
Output:
2017-12-05 17:03:24,445 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:62)] Configuration provider starting
2017-12-05 17:03:24,456 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:134)] Reloading configuration file:./conf/flume-conf2.properties
2017-12-05 17:03:24,470 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:930)] Added sinks: k1 Agent: a1
2017-12-05 17:03:24,471 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
(... the Processing:k1 line repeats once per remaining k1 property ...)
2017-12-05 17:03:24,646 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:140)] Post-validation flume configuration contains configuration for agents: [a1]
2017-12-05 17:03:24,646 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:147)] Creating channels
2017-12-05 17:03:24,657 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:42)] Creating instance of channel c1 type memory
2017-12-05 17:03:24,686 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:201)] Created channel c1
2017-12-05 17:03:24,687 (conf-file-poller-0) [INFO - org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:41)] Creating instance of source r1, type exec
2017-12-05 17:03:24,705 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: k1, type: hdfs
2017-12-05 17:03:24,756 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:116)] Channel c1 connected to [r1, k1]
2017-12-05 17:03:24,836 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:137)] Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:r1,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@5b74db2 counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
2017-12-05 17:03:24,868 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:144)] Starting Channel c1
2017-12-05 17:03:24,871 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:159)] Waiting for channel: c1 to start. Sleeping for 500 ms
2017-12-05 17:03:24,874 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
2017-12-05 17:03:24,874 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: CHANNEL, name: c1 started
2017-12-05 17:03:25,372 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:171)] Starting Sink k1
2017-12-05 17:03:25,374 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:182)] Starting Source r1
2017-12-05 17:03:25,376 (lifecycleSupervisor-1-4) [INFO - org.apache.flume.source.ExecSource.start(ExecSource.java:168)] Exec source starting with command: tail -f /tmp/flume_test.log
2017-12-05 17:03:25,378 (lifecycleSupervisor-1-4) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
2017-12-05 17:03:25,378 (lifecycleSupervisor-1-4) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: SOURCE, name: r1 started
2017-12-05 17:03:25,389 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
2017-12-05 17:03:25,389 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: SINK, name: k1 started
2017-12-05 17:03:29,432 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:57)] Serializer = TEXT, UseRawLocalFileSystem = false
2017-12-05 17:03:29,879 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:251)] Creating hdfs://master:9000/test/flume/output/FlumeData.1512464609433.tmp
As the last line shows, the file FlumeData.1512464609433.tmp has been created in HDFS.
Testing
Test 1
View the HDFS file contents
hadoop fs -cat /test/flume/output/FlumeData.1512464609433.tmp
Output:
hello world
Test 2
Append more lines to the local log file
cd /tmp
echo 'hello world2' >> flume_test.log
echo 'hello world3' >> flume_test.log
echo 'hello world4' >> flume_test.log
echo 'hello world5' >> flume_test.log
echo 'hello world6' >> flume_test.log
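The repeated echo commands above can also be written as a short loop with the same effect:

```shell
# append numbered test lines to the file Flume is tailing
for i in 2 3 4 5 6; do
  echo "hello world$i" >> /tmp/flume_test.log
done
```

Each appended line is picked up by tail -f and should appear in the HDFS file within a few seconds.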
Check the local log file contents
cat flume_test.log
Output:
hello world
hello world2
hello world3
hello world4
hello world5
hello world6
Check the HDFS file contents
hadoop fs -cat /test/flume/output/FlumeData.1512464609433.tmp
Output:
hello world
hello world2
hello world3
hello world4
hello world5
hello world6
Both tests pass.
This completes the small exercise of shipping local log output to HDFS with Flume.