Log storage with log4j + Flume + HDFS
- log4j: generates the application logs
- Flume: the log collection system that gathers the logs; version apache-flume-1.6.0-bin.tar.gz
- HDFS: the Hadoop Distributed File System that stores the logs; version hadoop-3.0.0-alpha1.tar.gz
I. Overview: everything is deployed on a single virtual machine (IP 10.34.11.65), with the hosts file configured accordingly.
II. HDFS configuration
1. Extract Hadoop to /opt/. Under /opt/hadoop-3.0.0-alpha1/, follow the pseudo-distributed setup. If you need a different port, change 9000 in core-site.xml to any other unused port.
etc/hadoop/core-site.xml:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

etc/hadoop/hdfs-site.xml:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```
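With those two files in place, the usual pseudo-distributed bring-up looks like the sketch below. This is standard Hadoop procedure rather than part of the original write-up, and it assumes you run it from /opt/hadoop-3.0.0-alpha1/ with passwordless SSH to localhost already configured:

```shell
# Format the namenode once, before the first start (wipes any existing HDFS data).
bin/hdfs namenode -format

# Start the NameNode and DataNode daemons.
sbin/start-dfs.sh

# Verify: NameNode and DataNode should appear in the JVM process list.
jps
```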
2. Create the /flume directory on HDFS and open up its permissions:
```shell
hadoop fs -mkdir /flume
hadoop fs -chown -R flume:flume /flume
hadoop fs -chmod -R 777 /flume
```
III. Flume configuration
1. Extract Flume to /opt/. In the /opt/apache-flume-1.6.0-bin/conf directory, create a file named f2.conf with the following contents:
```properties
agent-1.sources=source1
agent-1.channels=channel1
agent-1.sinks=sink1

agent-1.sources.source1.type=avro
agent-1.sources.source1.bind=localhost
agent-1.sources.source1.port=44446
agent-1.sources.source1.channels=channel1

agent-1.channels.channel1.type=memory
agent-1.channels.channel1.capacity=10000
agent-1.channels.channel1.transactionCapacity=1000
agent-1.channels.channel1.keep-alive=30

agent-1.sinks.sink1.type=hdfs
agent-1.sinks.sink1.channel=channel1
agent-1.sinks.sink1.hdfs.path=hdfs://localhost:9000/flume
agent-1.sinks.sink1.hdfs.fileType=DataStream
agent-1.sinks.sink1.hdfs.writeFormat=Text
agent-1.sinks.sink1.hdfs.rollInterval=0
agent-1.sinks.sink1.hdfs.rollSize=10240
agent-1.sinks.sink1.hdfs.rollCount=0
agent-1.sinks.sink1.hdfs.idleTimeout=60
```
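Two optional tweaks, offered as suggestions rather than part of the original setup: the memory channel above drops buffered events if the agent dies, and Flume's file channel is the durable alternative; the HDFS sink can also partition output by date via escape sequences, which requires each event to carry a timestamp (the directory paths below are placeholders):

```properties
# Durable alternative to the memory channel (checkpoint/data dirs are examples).
agent-1.channels.channel1.type=file
agent-1.channels.channel1.checkpointDir=/opt/flume-data/checkpoint
agent-1.channels.channel1.dataDirs=/opt/flume-data/data

# Date-partitioned output path; useLocalTimeStamp stamps events at the sink.
agent-1.sinks.sink1.hdfs.path=hdfs://localhost:9000/flume/%Y-%m-%d
agent-1.sinks.sink1.hdfs.useLocalTimeStamp=true
```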
2. Start the agent:

```shell
flume-ng agent -c /opt/apache-flume-1.6.0-bin/conf -f /opt/apache-flume-1.6.0-bin/conf/f2.conf -Dflume.root.logger=INFO,console -n agent-1
```
IV. The log4j client project
1. With Flume running, create a Maven project in Eclipse. pom.xml:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>flume-test</groupId>
  <artifactId>flume-test</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <dependencies>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>1.7.21</version>
    </dependency>
    <dependency>
      <groupId>org.apache.flume</groupId>
      <artifactId>flume-ng-core</artifactId>
      <version>1.6.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.flume.flume-ng-clients</groupId>
      <artifactId>flume-ng-log4jappender</artifactId>
      <version>1.6.0</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-war-plugin</artifactId>
        <version>2.6</version>
        <configuration>
          <warSourceDirectory>WebContent</warSourceDirectory>
          <failOnMissingWebXml>false</failOnMissingWebXml>
        </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.5</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
```
2. Create the test class WriteLog, which logs the current epoch time in milliseconds every two seconds:

```java
package com.flume;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Date;

/**
 * Created by ywchen on 2016-09-06.
 */
public class WriteLog {
    protected static final Logger logger = LoggerFactory.getLogger(WriteLog.class);

    public static void main(String[] args) throws Exception {
        while (true) {
            logger.info(String.valueOf(new Date().getTime()));
            Thread.sleep(2000);
        }
    }
}
```
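Each line WriteLog emits is a raw epoch-milliseconds value, so the files that land on HDFS are not human-readable. A small sketch for decoding such lines when inspecting the output (the class name and the date format are my own, not part of the project):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Hypothetical helper: turns one epoch-millis log line back into a readable UTC timestamp.
public class LogTimeDecoder {
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss").withZone(ZoneOffset.UTC);

    static String decode(String line) {
        return FMT.format(Instant.ofEpochMilli(Long.parseLong(line.trim())));
    }

    public static void main(String[] args) {
        // 1473134400000 ms since the epoch is 2016-09-06 04:00:00 UTC
        System.out.println(decode("1473134400000"));
    }
}
```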
3. Under resources, configure log4j.properties as follows:
```properties
### set log levels ###
log4j.rootLogger=INFO, stdout, file, flume
log4j.logger.per.flume=INFO

### flume ###
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.layout=org.apache.log4j.PatternLayout
log4j.appender.flume.Hostname=localhost
log4j.appender.flume.Port=44446

### stdout ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Threshold=INFO
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %c{1} [%p] %m%n

### file ###
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.Threshold=INFO
log4j.appender.file.File=./logs/tracker/tracker.log
log4j.appender.file.Append=true
log4j.appender.file.DatePattern='.'yyyy-MM-dd
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %c{1} [%p] %m%n
```
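One caveat worth knowing: with this configuration the Log4jAppender throws when it cannot reach the Flume agent, which can take the logging application down with it. The appender has an unsafe-mode switch for exactly that case; a possible addition, to be verified against your flume-ng-log4jappender version:

```properties
# Swallow append failures instead of propagating them when the agent is unreachable.
log4j.appender.flume.UnsafeMode=true
```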
4. Create a run.sh that puts the Flume jars and the packaged project jar on the classpath and launches WriteLog:

```shell
#!/bin/sh
jarlist=$(ls /opt/apache-flume-1.6.0-bin/lib/*.jar)
CLASSPATH=/opt/flume-test-0.0.1-SNAPSHOT.jar
for jar in ${jarlist}
do
  CLASSPATH=${CLASSPATH}:${jar}
done
echo ${CLASSPATH}
java -classpath $CLASSPATH com.flume.WriteLog
```
5. Package the project:

```shell
mvn clean package
```

In my case this produces flume-test-0.0.1-SNAPSHOT.jar. Then run the script:

```shell
./run.sh
```

Now open apache-flume-1.6.0-bin/logs/flume.log and you can see the log messages emitted by the program.
V. Results
1. Check the files stored in the corresponding directory on HDFS.
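To inspect what landed on HDFS, a sketch of the commands I would use; the FlumeData prefix is the HDFS sink's default, and the exact file name below is a placeholder for whatever the listing shows:

```shell
# List the files the HDFS sink has written.
hadoop fs -ls /flume

# Print the contents of one of them (adjust the name to what -ls shows).
hadoop fs -cat /flume/FlumeData.1473134400000
```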