Flume 1.7.0: Common Configurations
Source: Internet · Editor: 程序博客网 · Date: 2024/05/16 07:08
1. Source: spooling directory
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /opt/nginx/test
a1.sources.r1.channels = c1
a1.sources.r1.fileSuffix = .pro
a1.sources.r1.includePattern = ^.*$
a1.channels.c1.type = memory
a1.channels.c1.capacity = 6000
a1.channels.c1.transactionCapacity = 100
a1.sinks.k1.type = com.sohu.tv.flume.sink.TestSink
a1.sinks.k1.channel = c1
a1.sinks.k1.kafka.topic = test
a1.sinks.k1.kafka.batchsize = 1
a1.sinks.k1.serializer.class = kafka.serializer.StringEncoder
a1.sinks.k1.key.serializer = org.apache.kafka.common.serialization.StringSerializer
a1.sinks.k1.value.serializer = org.apache.kafka.common.serialization.StringSerializer
a1.sinks.k1.compression.codec = 2
a1.sinks.k1.bootstrap.servers=10.10.*.201:9092
#a1.sinks.k1.metadata.broker.list=10.10.*.194:9092,10.10.*.204:9092
a1.sinks.k1.request.required.acks=1
a1.sinks.k1.producer.type=async
a1.sinks.k1.queue.buffering.max.messages=1
a1.sinks.k1.batch.num.messages=1
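The spooldir source only picks up files whose names match `includePattern`, and renames each fully ingested file by appending `fileSuffix` so it is not read twice. A minimal Python sketch of that matching-and-renaming rule (the helper functions are illustrative, not Flume APIs):

```python
import re

# Settings mirroring the config above
INCLUDE_PATTERN = re.compile(r"^.*$")  # includePattern: accept every file name
FILE_SUFFIX = ".pro"                   # fileSuffix: appended once ingestion completes

def should_ingest(filename: str) -> bool:
    """A file is picked up only if it matches includePattern
    and has not already been marked as completed."""
    return bool(INCLUDE_PATTERN.match(filename)) and not filename.endswith(FILE_SUFFIX)

def completed_name(filename: str) -> str:
    """Name the source gives a file after it has been fully ingested."""
    return filename + FILE_SUFFIX

print(should_ingest("access.log"))      # new file -> True
print(should_ingest("access.log.pro"))  # already processed -> False
print(completed_name("access.log"))     # access.log.pro
```

Note that `^.*$` admits every file name, so here the suffix check alone decides what gets skipped; a tighter `includePattern` (e.g. `^.*\.log$`) would also exclude temporary files being written into the spool directory.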
2. Sink: HDFS
a1.sources = r1
a1.channels = c2
a1.sinks = k2
a1.sources.r1.type = exec
a1.sources.r1.command = tail -n +0 -F /opt/nginx/logs/link_pt.log
a1.sources.r1.channels = c2
a1.sources.r1.batchSize = 200
#a1.sources.r1.useHost = true
# batchSize is the number of events committed to the channel, or taken from it, in one operation
#a1.channels.c2.type = memory
#a1.channels.c2.capacity = 10000              # channel capacity
#a1.channels.c2.keep-alive = 6
#a1.channels.c2.transactionCapacity = 1000    # transaction capacity; about the batch size, must be >= batchSize
a1.channels.c2.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c2.capacity = 10000
a1.channels.c2.brokerList = 10.10.*.162:9092,10.10.*.104:9092,10.10.*.108:9092
a1.channels.c2.topic = adlog_channel
a1.channels.c2.groupId = adlog88161
a1.channels.c2.zookeeperConnect = 10.10.*.194:2181,10.10.*.204:2181,10.10.*.125:2181
a1.channels.c2.transactionCapacity = 1000
a1.sinks.k2.type = hdfs
a1.sinks.k2.channel = c2
a1.sinks.k2.hdfs.path = hdfs://buffercluster1/user/hive/warehouse/adlog_text/%Y%m%d/%Y%m%d%H
a1.sinks.k2.hdfs.useLocalTimeStamp = true
a1.sinks.k2.hdfs.filePrefix = adlogaccess_-%Y%m%d%H
#a1.sinks.k2.hdfs.filePrefix = pvaccess-
a1.sinks.k2.hdfs.fileType = DataStream
a1.sinks.k2.hdfs.writeFormat = Text
a1.sinks.k2.hdfs.round = true
a1.sinks.k2.hdfs.roundValue = 1
a1.sinks.k2.hdfs.roundUnit = hour
a1.sinks.k2.hdfs.callTimeout = 30000
a1.sinks.k2.hdfs.rollInterval = 3600
a1.sinks.k2.hdfs.rollSize = 0
a1.sinks.k2.hdfs.rollCount = 0
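With `hdfs.useLocalTimeStamp = true`, the HDFS sink fills the strftime-style escapes in `hdfs.path` from the local clock instead of requiring a timestamp header on each event; `rollSize = 0` and `rollCount = 0` disable size- and count-based rolling, so files roll purely on the one-hour `rollInterval`. A sketch of the path expansion using Python's `strftime` (the expansion function is illustrative, not Flume code, but the template is the one from the config):

```python
from datetime import datetime

def expand_hdfs_path(template: str, ts: datetime) -> str:
    """Expand Flume's strftime-style escapes the way the HDFS sink does
    when hdfs.useLocalTimeStamp = true (local clock, no timestamp header)."""
    return ts.strftime(template)

path = "hdfs://buffercluster1/user/hive/warehouse/adlog_text/%Y%m%d/%Y%m%d%H"
ts = datetime(2017, 3, 15, 9, 30)
print(expand_hdfs_path(path, ts))
# hdfs://buffercluster1/user/hive/warehouse/adlog_text/20170315/2017031509
```

Combined with `round = true`, `roundValue = 1`, `roundUnit = hour`, the timestamp is rounded down to the hour before expansion, so all events from one hour land in the same `%Y%m%d%H` directory.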
3. Sink: Kafka
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = exec
a1.sources.r1.command = tail -n +0 -F /opt/log/link_vv.log
a1.sources.r1.channels = c1
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.channel = c1
a1.sinks.k1.kafka.topic = topic1,topic2
a1.sinks.k1.kafka.batchsize = 300
a1.sinks.k1.serializer.class = kafka.serializer.StringEncoder
a1.sinks.k1.compression.codec = 2
a1.sinks.k1.metadata.broker.list=10.10.*.51:9092,10.10.*.52:9092,10.10.*.53:9092
a1.sinks.k1.request.required.acks=1
a1.sinks.k1.producer.type=async
a1.sinks.k1.queue.buffering.max.messages=2000
a1.sinks.k1.batch.num.messages=1000
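With `producer.type=async`, the underlying producer does not send each event individually: it buffers up to `queue.buffering.max.messages` events and ships them in batches of `batch.num.messages`. A toy Python sketch of that batching behavior (illustration only, not the Kafka client API; the class and its parameters are hypothetical):

```python
from typing import Callable, List

class AsyncBatcher:
    """Toy model of async producer batching: buffer messages and
    flush once batch_num_messages have accumulated."""

    def __init__(self, batch_num_messages: int, send: Callable[[List[str]], None]):
        self.batch_num_messages = batch_num_messages
        self.send = send               # callback that ships one batch
        self.buffer: List[str] = []

    def enqueue(self, msg: str) -> None:
        self.buffer.append(msg)
        if len(self.buffer) >= self.batch_num_messages:
            self.flush()

    def flush(self) -> None:
        """Ship whatever is buffered, even a partial batch."""
        if self.buffer:
            self.send(self.buffer)
            self.buffer = []

batches: List[List[str]] = []
b = AsyncBatcher(batch_num_messages=3, send=batches.append)
for i in range(7):
    b.enqueue(f"event-{i}")
b.flush()  # drain the partial tail batch
print([len(x) for x in batches])  # [3, 3, 1]
```

Larger batches (here 1000 messages) trade per-message latency for throughput, which is the usual choice for log shipping; `request.required.acks=1` means the producer waits only for the partition leader's acknowledgement.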