Installing and Deploying Flume
Source: Internet · Editor: 程序博客网 · Posted: 2024/06/06 07:28
1. Upload and extract

Upload apache-flume-1.6.0-bin.tar.gz to the server, then extract it into the apps directory:

tar -zxvf apache-flume-1.6.0-bin.tar.gz -C apps/
2. Write the configuration file

cd /home/hadoop/apps/apache-flume-1.6.0-bin/conf
vi spool-logger.conf

Contents:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
# Watch a directory: spoolDir sets the directory to monitor,
# fileHeader controls whether each event carries the source file path as a header
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /home/hadoop/flumespool
a1.sources.r1.fileHeader = true

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

3. Create the spool directory
mkdir /home/hadoop/flumespool
4. Start the agent

cd /home/hadoop/apps/apache-flume-1.6.0-bin
bin/flume-ng agent -c ./conf -f ./conf/spool-logger.conf -n a1 -Dflume.root.logger=INFO,console
5. Test

Test by moving files into /home/hadoop/flumespool (e.g. mv ././xxxFile /home/hadoop/flumespool). Do not create or edit files directly inside that directory: the source expects each file to be complete when it appears.

If you put a file with the same name into flumespool a second time, the agent throws an error, because the spooling directory source renames each processed file with a .COMPLETED suffix and refuses to process a duplicate name.
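The safe pattern is therefore to finish writing a file somewhere else and then move it into the spool directory under a name that has never been used before. A minimal sketch of that pattern; temporary directories stand in for /home/hadoop/flumespool here so it runs anywhere:

```shell
# Stand-ins for a staging area and the Flume spool directory.
STAGING="$(mktemp -d)"
SPOOL="$(mktemp -d)"

# 1. Write the file completely outside the spool directory.
echo "some log content" > "$STAGING/app.log"

# 2. Move it in atomically, under a unique name (timestamp + PID here),
#    so the spooling source never sees a half-written or duplicate file.
mv "$STAGING/app.log" "$SPOOL/app-$(date +%s)-$$.log"

ls "$SPOOL"
```

mv within the same filesystem is atomic, so the source only ever sees complete files.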
6. An alternative configuration: tailing a log file into HDFS

vi tail-hdfs.conf

Contents:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/log/test.log
a1.sources.r1.channels = c1

# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.rollInterval = 3
a1.sinks.k1.hdfs.rollSize = 20
a1.sinks.k1.hdfs.rollCount = 5
a1.sinks.k1.hdfs.batchSize = 1
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Output file type; the default is SequenceFile. DataStream writes plain text.
a1.sinks.k1.hdfs.fileType = DataStream

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Create the log file:

cd ~
mkdir /home/hadoop/log
touch /home/hadoop/log/test.log
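To give the exec source something to pick up, append lines to the tailed file the way an application writing its log would. A minimal sketch; it writes to a temporary file so it runs anywhere, but in the real setup you would append to /home/hadoop/log/test.log:

```shell
# Append a few timestamped lines, simulating an application log.
# LOG points at a temp file here; use /home/hadoop/log/test.log for real.
LOG="$(mktemp)"
i=1
while [ "$i" -le 5 ]; do
  echo "$(date '+%Y-%m-%d %H:%M:%S') test event $i" >> "$LOG"
  i=$((i + 1))
done
wc -l < "$LOG"
```

While the agent is running, each appended line should show up in HDFS under /flume/events/ shortly afterwards, since the configuration above uses batchSize 1 and rollInterval 3 seconds.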
Start the agent

This configuration fetches data with the tail command and sinks it to HDFS. Start command:

bin/flume-ng agent -c conf -f conf/tail-hdfs.conf -n a1

To check whether it worked:
Open another terminal window and run:
hadoop fs -ls /flume