Flume: Collecting Nginx Logs into HDFS (tail-to-HDFS sink)


Reposted from: http://blog.csdn.net/luyee2010/article/details/22159445


The Nginx access.log grows at roughly 8,000 entries/s; every 1,000,000 entries comes to about 253 MB and takes about 2 minutes to accumulate.
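These numbers line up with the roll settings in the configuration below: at 8,000 entries/s, rollCount = 1000000 closes a file roughly every 1000000 / 8000 ≈ 125 s (about 2 minutes), each holding around 253 MB of raw log; rollInterval = 600 is a time-based fallback for quieter periods, and rollSize = 0 disables size-based rolling. The full agent configuration: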
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1

# Describe/configure spooldir source1
#agent1.sources.source1.type = spooldir
#agent1.sources.source1.spoolDir = /var/log/apache/flumeSpool1
#agent1.sources.source1.fileHeader = true

# Describe/configure tail -F source1
agent1.sources.source1.type = exec
agent1.sources.source1.command = tail -F /tmp/log.log
agent1.sources.source1.channels = channel1

# Describe/configure nc source1
#agent1.sources.source1.type = netcat
#agent1.sources.source1.bind = localhost
#agent1.sources.source1.port = 44444

# Configure host interceptor for source
agent1.sources.source1.interceptors = i1
agent1.sources.source1.interceptors.i1.type = host
agent1.sources.source1.interceptors.i1.hostHeader = hostname

# Describe sink1
#agent1.sinks.sink1.type = logger
agent1.sinks.sink1.type = hdfs
#a1.sinks.k1.channel = c1
#agent1.sinks.sink1.hdfs.path = hdfs://xxx:9000/tmp/tail/%y-%m-%d/%H%M%S
agent1.sinks.sink1.hdfs.path = hdfs://xxx:9000/tmp/tail/%y-%m-%d/%H
agent1.sinks.sink1.hdfs.filePrefix = %{hostname}/events-
agent1.sinks.sink1.hdfs.maxOpenFiles = 5000
agent1.sinks.sink1.hdfs.batchSize = 500
agent1.sinks.sink1.hdfs.fileType = DataStream
agent1.sinks.sink1.hdfs.writeFormat = Text
agent1.sinks.sink1.hdfs.rollSize = 0
agent1.sinks.sink1.hdfs.rollCount = 1000000
agent1.sinks.sink1.hdfs.rollInterval = 600
#agent1.sinks.sink1.hdfs.round = true
#agent1.sinks.sink1.hdfs.roundValue = 10
#agent1.sinks.sink1.hdfs.roundUnit = minute
agent1.sinks.sink1.hdfs.useLocalTimeStamp = true

# Use a channel which buffers events in memory
agent1.channels.channel1.type = memory
agent1.channels.channel1.keep-alive = 120
agent1.channels.channel1.capacity = 500000
agent1.channels.channel1.transactionCapacity = 600

# Bind the source and sink to the channel
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
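A minimal sketch of running this, assuming the configuration above is saved as tail-hdfs.conf (the file name is an assumption) and the Flume and Hadoop binaries are on the PATH:

    # Start the agent; its name in the config above is agent1
    flume-ng agent --conf conf --conf-file tail-hdfs.conf --name agent1 -Dflume.root.logger=INFO,console

    # Check that events are landing in HDFS; %y-%m-%d/%H expands to a
    # two-digit-year date and hour, e.g. (hypothetical date/hour):
    hdfs dfs -ls /tmp/tail/14-03-27/13

Since filePrefix contains a slash (%{hostname}/events-), the hostname header added by the host interceptor becomes one more directory level, so files show up under /tmp/tail/<date>/<hour>/<hostname>/ with the events- prefix (open files carry a .tmp suffix until they are rolled). Note that useLocalTimeStamp = true is what lets the sink resolve the time escapes from the agent's own clock, since events from an exec source carry no timestamp header.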


