Flume-Kafka Integration: Real-Time Log Collection
Source: Internet · Editor: 程序博客网 · Date: 2024/06/04 20:07
Flume architecture (two chained agents):
agent 1: exec-source + memory-channel + avro-sink
agent 2: avro-source + memory-channel + kafka-sink
Kafka setup
Start ZooKeeper:
zkServer.sh start
Start Kafka (i.e. start a broker, the "basket" that holds the messages):
kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties &
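For reference, the broker settings that matter for this setup live in $KAFKA_HOME/config/server.properties. An illustrative excerpt (the values are assumptions chosen to match the host and ports used below, not the stock defaults):

```
# unique id of this broker within the cluster
broker.id=0
# where the broker listens; the Flume KafkaSink below points here
listeners=PLAINTEXT://192.168.145.128:9092
# on-disk directory for topic log segments
log.dirs=/tmp/kafka-logs
# the ZooKeeper instance started above
zookeeper.connect=192.168.145.128:2181
```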
Test run
Start a Kafka consumer listening on the topic:
kafka-console-consumer.sh --zookeeper 192.168.145.128:2181 --from-beginning --topic firstTopic
Append data to the log file:
echo hi,flume-kafka framework >> flume-kafka.txt
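To keep data flowing while you watch the consumer, a simple loop can stand in for a real application writing logs. A minimal sketch (LOGFILE is relative here for demonstration; in the setup below the exec source tails /root/data/flume-kafka.txt, so point it there):

```shell
# Simulate an application appending timestamped lines to the tailed log file.
LOGFILE=flume-kafka.txt
for i in 1 2 3; do
  echo "$(date '+%Y-%m-%d %H:%M:%S') test-line-$i" >> "$LOGFILE"
done
```

Each appended line should appear in the console consumer within a few seconds once both agents are running.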
exec-memory-avro.conf
flume-ng agent \
  --name exec-memory-avro \
  --conf $FLUME_HOME/conf \
  --conf-file $FLUME_HOME/conf/exec-memory-avro.conf \
  -Dflume.root.logger=INFO,console

# example exec-memory-avro
exec-memory-avro.sources = exec-source
exec-memory-avro.sinks = avro-sink
exec-memory-avro.channels = memory-channel

# Describe/configure the source
exec-memory-avro.sources.exec-source.type = exec
exec-memory-avro.sources.exec-source.command = tail -F /root/data/flume-kafka.txt
exec-memory-avro.sources.exec-source.shell = /bin/sh -c

# Describe the sink
exec-memory-avro.sinks.avro-sink.type = avro
exec-memory-avro.sinks.avro-sink.hostname = 192.168.145.128
exec-memory-avro.sinks.avro-sink.port = 44444

# Use a channel which buffers events in memory
exec-memory-avro.channels.memory-channel.type = memory

# Bind the source and sink to the channel
exec-memory-avro.sources.exec-source.channels = memory-channel
exec-memory-avro.sinks.avro-sink.channel = memory-channel
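A common pitfall with this launch command: the value passed to --name must exactly match the prefix of every property in the conf file, or the agent starts with no sources or sinks and silently does nothing. A quick local sanity check (a sketch; the here-doc stands in for the real conf file):

```shell
# Verify every line in a Flume conf uses the agent-name prefix.
AGENT=exec-memory-avro
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
exec-memory-avro.sources = exec-source
exec-memory-avro.sinks = avro-sink
exec-memory-avro.channels = memory-channel
EOF
# Count lines that do NOT start with "<agent>." (0 means the file is consistent).
UNPREFIXED=$(grep -cv "^${AGENT}\." "$CONF" || true)
echo "unprefixed lines: $UNPREFIXED"
```

Run the same check against avro-memory-kafka.conf with AGENT=avro-memory-kafka before starting the second agent.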
avro-memory-kafka.conf
flume-ng agent \
  --name avro-memory-kafka \
  --conf $FLUME_HOME/conf \
  --conf-file $FLUME_HOME/conf/avro-memory-kafka.conf \
  -Dflume.root.logger=INFO,console

# example avro-memory-kafka
avro-memory-kafka.sources = avro-source
avro-memory-kafka.sinks = kafka-sink
avro-memory-kafka.channels = memory-channel

# Describe/configure the source
avro-memory-kafka.sources.avro-source.type = avro
avro-memory-kafka.sources.avro-source.bind = 192.168.145.128
avro-memory-kafka.sources.avro-source.port = 44444

# Describe the sink
avro-memory-kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
avro-memory-kafka.sinks.kafka-sink.brokerList = 192.168.145.128:9092
avro-memory-kafka.sinks.kafka-sink.topic = firstTopic
avro-memory-kafka.sinks.kafka-sink.batchSize = 3
avro-memory-kafka.sinks.kafka-sink.requiredAcks = 1

# Use a channel which buffers events in memory
avro-memory-kafka.channels.memory-channel.type = memory

# Bind the source and sink to the channel
avro-memory-kafka.sources.avro-source.channels = memory-channel
avro-memory-kafka.sinks.kafka-sink.channel = memory-channel
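Note that brokerList, topic, batchSize and requiredAcks are the legacy KafkaSink property names from the Flume 1.6 era. On Flume 1.7 and later the same settings are spelled differently; an equivalent sink section under the newer names would look like this (a sketch with the same values as above):

```
avro-memory-kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
avro-memory-kafka.sinks.kafka-sink.kafka.bootstrap.servers = 192.168.145.128:9092
avro-memory-kafka.sinks.kafka-sink.kafka.topic = firstTopic
avro-memory-kafka.sinks.kafka-sink.flumeBatchSize = 3
avro-memory-kafka.sinks.kafka-sink.kafka.producer.acks = 1
```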