Spark: Continuing the Lambda Architecture from Last Time

Log Analysis

Single-machine log analysis works for small data volumes (up to roughly 10 GB); awk/grep/sort/join and similar tools are the workhorses of log analysis.
Examples:
1. Shell: get the top ten IPs by request count from an Nginx log

cat access.log.10 | awk '{a[$1]++} END {for(b in a) print b"\t"a[b]}' | sort -k2 -rn | head -n 10

2. Python: count the number of hits per IP address

import re
import sys

def NginxIpHit(logfile_path):
    # Match a dotted-quad IPv4 address at the start of each log line
    ipadd = r'\.'.join([r'\d{1,3}'] * 4)
    re_ip = re.compile(ipadd)
    iphitlisting = {}
    for line in open(logfile_path):
        match = re_ip.match(line)
        if match:
            ip = match.group()
            iphitlisting[ip] = iphitlisting.get(ip, 0) + 1
    print(iphitlisting)

contents = sys.argv[1]
NginxIpHit(contents)

**For large-scale log processing, the log-analysis metrics are:
PV, UV, PV per UV (PUPV), funnel model and conversion rate, retention rate, and user attributes.
The metrics are ultimately presented through a UI.**
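To make the PV and UV definitions concrete, here is a minimal single-machine Python sketch in the spirit of the examples above; it assumes a whitespace-delimited access log whose first field identifies the visitor (here simply the client IP), and the file name is arbitrary.

# Toy PV/UV counter: PV = total requests, UV = number of distinct visitors.
# Assumes a whitespace-delimited access log whose first field identifies the
# visitor (here simply the client IP). Run as: python pv_uv.py access.log
import sys

def pv_uv(logfile_path):
    pv = 0
    visitors = set()
    with open(logfile_path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            pv += 1
            visitors.add(fields[0])
    uv = len(visitors)
    return pv, uv, (float(pv) / uv if uv else 0.0)

if __name__ == "__main__":
    pv, uv, pv_per_uv = pv_uv(sys.argv[1])
    print("PV=%d  UV=%d  PV per UV=%.2f" % (pv, uv, pv_per_uv))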

Architecture

(Architecture diagram)

  • 1. Real-time log processing pipeline

Data collection: use Flume NG to collect the logs
Data aggregation and forwarding: use Flume to forward and aggregate the data into the real-time messaging system Kafka
Data processing: use Spark Streaming for real-time processing
Result display: use Flask as the visualization tool to display the results

  • 2. Offline log processing pipeline

Data collection: use Flume to transfer the data to HDFS
Data processing: use Spark SQL to preprocess the data
Result presentation: summarize the results into MySQL and finally present them with Flask
Lambda architecture: a combined data-processing environment with low response latency.
Query path: one stream-processing pass plus one batch-processing pass, corresponding to the real-time and offline pipelines respectively.
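As a rough sketch of that query path (not the project's actual code), merging the two layers can look like this, with hypothetical per-day PV views:

# Lambda-style serving sketch: an answer merges the batch view (computed
# offline over all historical data) with the speed-layer view (the increment
# accumulated since the last batch run). All names and values are hypothetical.
batch_view = {"2016-05-01": 10240}     # e.g. daily PV from the offline Spark SQL job
realtime_view = {"2016-05-01": 312}    # e.g. PV counted so far by Spark Streaming

def query_pv(day):
    """Merge the two layers: batch result plus real-time delta."""
    return batch_view.get(day, 0) + realtime_view.get(day, 0)

print(query_pv("2016-05-01"))  # -> 10552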

Project Workflow

Install Flume
Flume performs the log collection; web-side logs typically come from Nginx, IIS, Tomcat, etc. Here the Tomcat logs live under /var/data/log.
Install the JDK
Install Flume

wget http://mirrors.cnnic.cn/apache/flume/1.5.0/apache-flume-1.5.0-bin.tar.gz
tar -zxvf apache-flume-1.5.0-bin.tar.gz
mv apache-flume-1.5.0-bin apache-flume-1.5.0
ln -s apache-flume-1.5.0 flume

Configure environment variables

vim /etc/profile

export JAVA_HOME=/usr/local/jdk
export CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
export FLUME_HOME=/usr/local/flume
export FLUME_CONF_DIR=$FLUME_HOME/conf
export PATH=$PATH:$FLUME_HOME/bin

source /etc/profile

Create the agent configuration file that writes the data to HDFS; edit flume.conf:

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Step 1: configure the data source
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
# Log output to monitor
a1.sources.r1.command = tail -f /var/log/data

# Step 2: configure the data sink (HDFS)
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.path = hdfs://192.168.11.177:9000/flume/events/%Y/%m/%d/%H/%M
a1.sinks.k1.hdfs.filePrefix = cmcc
a1.sinks.k1.hdfs.minBlockReplicas = 1
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.rollInterval = 60
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.idleTimeout = 0

# Step 3: configure the data channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Step 4: wire the three together
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the Flume agent

cd /usr/local/flume
nohup bin/flume-ng agent -n a1 -c conf -f conf/flume-conf.properties &

Flume is now integrated with HDFS.

  • Integrate Flume, Kafka, and HDFS
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2

# HDFS sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.path = hdfs://192.168.11.174:9000/flume/events/%Y/%m/%d/%H/%M
a1.sinks.k1.hdfs.filePrefix = cmcc-%H
a1.sinks.k1.hdfs.minBlockReplicas = 1
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.rollInterval = 3600
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.idleTimeout = 0

# Kafka sink (uses a memory channel for better performance)
a1.sinks.k2.type = com.cmcc.chiwei.Kafka.CmccKafkaSink
a1.sinks.k2.channel = c2
a1.sinks.k2.metadata.broker.list = 192.168.11.174:9002,192.168.11.175:9092,192.168.11.174:9092
a1.sinks.k2.partition.key = 0
a1.sinks.k2.partitioner.class = com.cmcc.chiwei.Kafka.CmccPartition
a1.sinks.k2.serializer.class = kafka.serializer.StringEncoder
a1.sinks.k2.request.acks = 0
a1.sinks.k2.cmcc.encoding = UTF-8
a1.sinks.k2.cmcc.topic.name = cmcc
a1.sinks.k2.producer.type = async
a1.sinks.k2.batchSize = 100

# Replicate each event to both channels
a1.sources.r1.selector.type = replicating
a1.sources.r1.channels = c1 c2

# c1: file channel (for the HDFS sink)
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /home/flume/flumeCheckpoint
a1.channels.c1.dataDirs = /home/flume/flumeData,/home/flume/flumeDataExt
a1.channels.c1.capacity = 2000000
a1.channels.c1.transactionCapacity = 100

# c2: memory channel (for the Kafka sink)
a1.channels.c2.type = memory
a1.channels.c2.capacity = 2000000
a1.channels.c2.transactionCapacity = 100

Aggregate the logs with Kafka

tar -zxvf kafka_2.10-0.8.1.1.tgz

Configure the Kafka and ZooKeeper files. zookeeper.properties:

dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
initLimit=5
syncLimit=2
server.43=10.190.182.43:2888:3888
server.38=10.190.182.38:2888:3888
server.33=10.190.182.33:2888:3888

Configure the ZooKeeper myid

On each server, create a myid file under dataDir containing that machine's id.

# e.g. on server.43, myid holds the machine number 43
echo "43" > /tmp/zookeeper/myid

Configure the Kafka file config/server.properties; each node is configured according to its own hostname:

broker.id=43
host.name=10.190.172.43
zookeeper.connect=10.190.172.43:2181,10.190.172.33:2181,10.190.172.38:2181

Start ZooKeeper
Kafka stores its metadata in ZooKeeper, so start ZooKeeper first to give Kafka its connection address.
Here the ZooKeeper bundled with Kafka is used.

On each node: bin/zookeeper-server-start.sh config/zookeeper.properties
Start Kafka

bin/kafka-server-start.sh config/server.properties

Create and inspect the topic
The topic must match the one configured in Flume; Spark Streaming consumes the same topic.

bin/kafka-topics.sh --create --zookeeper 10.190.172.43:2181 --replication-factor 1 --partitions 1 --topic KafkaTopic

Check it:

bin/kafka-topics.sh --describe --zookeeper 10.190.172.43:2181

Integrate Kafka and Spark Streaming

In build.sbt, add dependencies on spark-core, spark-streaming, spark-streaming-kafka, and kafka.
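The sbt dependencies above target the Scala route; as an illustrative alternative, here is a minimal PySpark Streaming sketch that consumes the KafkaTopic created earlier through the receiver-based KafkaUtils.createStream API (Spark 1.x / Kafka 0.8 era). The ZooKeeper address and topic come from the steps above; the batch interval, consumer group id, and per-IP counting logic are assumptions.

# Minimal Spark Streaming consumer for the Kafka topic created above
# (receiver-based API for Kafka 0.8; submit with the matching
# spark-streaming-kafka package for your Spark version).
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="NginxLogStreaming")
ssc = StreamingContext(sc, 10)  # 10-second batches (assumed interval)

# ZooKeeper quorum and topic from the Kafka setup above; the group id is arbitrary.
kafka_stream = KafkaUtils.createStream(
    ssc, "10.190.172.43:2181", "log-analysis-group", {"KafkaTopic": 1})

# Each record is a (key, message) pair; count requests per client IP in every
# batch, assuming the IP is the first whitespace-delimited field of the line.
ip_counts = (kafka_stream.map(lambda kv: kv[1])
             .filter(lambda line: line.strip())
             .map(lambda line: (line.split()[0], 1))
             .reduceByKey(lambda a, b: a + b))
ip_counts.pprint()

ssc.start()
ssc.awaitTermination()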
  • Spark Streaming real-time analysis
    Data collection and forwarding are now in place; Kafka hands the stream to Spark Streaming (a sketch is shown above).
  • Spark SQL offline analysis (see the first sketch after this list)
  • Flask visualization (see the second sketch after this list)
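For the offline line, a minimal PySpark SQL sketch along these lines could preprocess the files Flume wrote to HDFS and summarize daily PV/UV into MySQL. The HDFS path matches the Flume sink above, while the log field layout, MySQL database, table name, and credentials are hypothetical (Spark 2.x+ DataFrame API assumed).

# Offline preprocessing with Spark SQL: read the raw logs Flume wrote to HDFS,
# compute daily PV/UV, and save the summary to MySQL.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("NginxLogOffline").getOrCreate()

# The path matches the Flume HDFS sink above; treating field 0 as the client IP
# and field 3 as the date is an assumption about the log layout.
lines = spark.read.text("hdfs://192.168.11.174:9000/flume/events/*/*/*/*/*")
logs = lines.select(
    F.split("value", " ").getItem(0).alias("ip"),
    F.split("value", " ").getItem(3).alias("day"))

daily = logs.groupBy("day").agg(
    F.count("*").alias("pv"),
    F.countDistinct("ip").alias("uv"))

# Hypothetical MySQL target; needs the MySQL JDBC driver on the Spark classpath.
daily.write.format("jdbc").options(
    url="jdbc:mysql://localhost:3306/loganalysis",
    dbtable="daily_metrics",
    user="root",
    password="secret",
    driver="com.mysql.jdbc.Driver").mode("overwrite").save()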

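And a minimal Flask sketch that serves the MySQL summary to the UI as JSON; the table name and connection details follow the hypothetical ones in the Spark SQL sketch.

# Minimal Flask endpoint that serves the daily PV/UV summary as JSON
# (hypothetical MySQL table and credentials from the Spark SQL sketch above).
import pymysql
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/metrics/daily")
def daily_metrics():
    conn = pymysql.connect(host="localhost", user="root",
                           password="secret", database="loganalysis")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT day, pv, uv FROM daily_metrics ORDER BY day")
            rows = [{"day": d, "pv": pv, "uv": uv} for d, pv, uv in cur.fetchall()]
    finally:
        conn.close()
    return jsonify(metrics=rows)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)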
Code

See: github.com/jinhang
