A complete real-time stream processing pipeline with Flume + Kafka + Spark Streaming



1. Environment: four test servers

Spark cluster of three nodes: spark1, spark2, spark3

Kafka cluster of three nodes: spark1, spark2, spark3

ZooKeeper cluster of three nodes: spark1, spark2, spark3

Log receiving server: spark1

Log collection server: redis (this machine is normally used for Redis development and is only borrowed for the log collection test, so its hostname is left unchanged)


Log collection flow:

log collection server -> log receiving server -> Kafka cluster -> Spark cluster for processing

Note: in production, the log collection server will very likely be an application server, while the log receiving server is one node of the big-data cluster. Logs are shipped over the network to the receiving server and only then enter the cluster for processing.

This is because, in production, the network is usually opened in one direction only, towards a specific port on a specific server.


Flume version: apache-flume-1.5.0-cdh5.4.9. This release already integrates Kafka support quite well.


2. Log collection server (the forwarding side)

Configure Flume to continuously collect a specific log file; collect.conf is as follows:

# Name the components on this agent
a1.sources = tailsource-1
a1.sinks = remotesink
a1.channels = memoryChannel-1

# Describe/configure the source
a1.sources.tailsource-1.type = exec
a1.sources.tailsource-1.command = tail -F /opt/modules/tmpdata/logs/1.log
a1.sources.tailsource-1.channels = memoryChannel-1

# Use a channel which buffers events in memory
a1.channels.memoryChannel-1.type = memory
a1.channels.memoryChannel-1.keep-alive = 10
a1.channels.memoryChannel-1.capacity = 100000
a1.channels.memoryChannel-1.transactionCapacity = 100000

# Describe the sink and bind it to the channel
a1.sinks.remotesink.type = avro
a1.sinks.remotesink.hostname = spark1
a1.sinks.remotesink.port = 666
a1.sinks.remotesink.channel = memoryChannel-1

The log file is tailed in real time, and events are sent over the network via Avro to port 666 on the spark1 server.

Start the collection-side agent with:

bin/flume-ng agent --conf conf --conf-file conf/collect.conf --name a1 -Dflume.root.logger=INFO,console
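
To exercise the collection side, a few test lines can be appended to the tailed file so that the exec source picks them up. A minimal sketch in Scala (the file path comes from collect.conf above; the word lines are only illustrative test data):

import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths, StandardOpenOption}

object AppendTestLogs {
  def main(args: Array[String]): Unit = {
    val logFile = Paths.get("/opt/modules/tmpdata/logs/1.log")
    // Append a couple of space-separated word lines; tail -F in the collector agent will pick them up
    val lines = Seq("hadoop spark", "hive storm hadoop spark")
    lines.foreach { line =>
      Files.write(logFile, (line + "\n").getBytes(StandardCharsets.UTF_8),
        StandardOpenOption.CREATE, StandardOpenOption.APPEND)
    }
  }
}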


3. Log receiving server

Configure Flume to receive the logs in real time; receive.conf is as follows:

#agent section
producer.sources = s
producer.channels = c
producer.sinks = r

#source section
producer.sources.s.type = avro
producer.sources.s.bind = spark1
producer.sources.s.port = 666
producer.sources.s.channels = c

# Each sink's type must be defined
producer.sinks.r.type = org.apache.flume.sink.kafka.KafkaSink
producer.sinks.r.topic = mytopic
producer.sinks.r.brokerList = spark1:9092,spark2:9092,spark3:9092
producer.sinks.r.requiredAcks = 1
producer.sinks.r.batchSize = 20

#Specify the channel the sink should use
producer.sinks.r.channel = c

# Each channel's type is defined.
producer.channels.c.type = org.apache.flume.channel.kafka.KafkaChannel
producer.channels.c.capacity = 10000
producer.channels.c.transactionCapacity = 1000
producer.channels.c.brokerList = spark1:9092,spark2:9092,spark3:9092
producer.channels.c.topic = channel1
producer.channels.c.zookeeperConnect = spark1:2181,spark2:2181,spark3:2181


The key points are that the source accepts the data arriving over the network on port 666 and the sink feeds it into the Kafka cluster; the topic and the ZooKeeper addresses must be configured correctly.

Start the receiving-side agent with:

bin/flume-ng agent --conf conf --conf-file conf/receive.conf --name producer -Dflume.root.logger=INFO,console
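
To check, independently of Flume, that the brokers and the mytopic topic are reachable, a standalone producer can be run. A minimal sketch assuming the Kafka 0.8 Scala producer API (the same API generation the direct stream in the next step relies on); the message text is illustrative only:

import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

object KafkaSendTest {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("metadata.broker.list", "spark1:9092,spark2:9092,spark3:9092")
    props.put("serializer.class", "kafka.serializer.StringEncoder")
    props.put("request.required.acks", "1")

    val producer = new Producer[String, String](new ProducerConfig(props))
    // Send one space-separated line so the streaming job has something to count
    producer.send(new KeyedMessage[String, String]("mytopic", "hadoop spark hive storm"))
    producer.close()
  }
}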


4. The Spark cluster processes the received data

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.StreamingContext
import kafka.serializer.StringDecoder
import org.apache.log4j.Level
import org.apache.log4j.Logger

/**
 * @author Administrator
 */
object KafkaDataTest {
  def main(args: Array[String]): Unit = {

    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.ERROR)

    val conf = new SparkConf().setAppName("stocker").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(1))

    // Kafka configurations
    val topics = Set("mytopic")
    val brokers = "spark1:9092,spark2:9092,spark3:9092"
    val kafkaParams = Map[String, String](
      "metadata.broker.list" -> brokers,
      "serializer.class" -> "kafka.serializer.StringEncoder")

    // Create a direct stream
    val kafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)

    // Split each message into words and pair every word with a count of 1
    val urlClickLogPairsDStream = kafkaStream.flatMap(_._2.split(" ")).map((_, 1))

    // Every 5 seconds, count word occurrences over the last 60 seconds
    val urlClickCountDaysDStream = urlClickLogPairsDStream.reduceByKeyAndWindow(
      (v1: Int, v2: Int) => v1 + v2,
      Seconds(60),
      Seconds(5))

    urlClickCountDaysDStream.print()

    ssc.start()
    ssc.awaitTermination()
  }
}

After Spark Streaming receives the data from the Kafka cluster, it computes, every 5 seconds, the word counts over the preceding 60 seconds.
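
As a side note, not part of the original job: the same 60-second window sliding every 5 seconds can be computed incrementally by also supplying an inverse reduce function, so that counts for events leaving the window are subtracted instead of recomputing the whole window on each slide. A minimal sketch meant to replace the reduceByKeyAndWindow call above; the checkpoint directory is a hypothetical path, and this variant requires checkpointing:

// Requires a checkpoint directory; "/tmp/streaming-checkpoint" is a hypothetical path
ssc.checkpoint("/tmp/streaming-checkpoint")

val urlClickCountDaysDStream = urlClickLogPairsDStream.reduceByKeyAndWindow(
  (v1: Int, v2: Int) => v1 + v2,  // add counts for data entering the window
  (v1: Int, v2: Int) => v1 - v2,  // subtract counts for data leaving the window
  Seconds(60),
  Seconds(5))

With this form, keys whose events have all aged out of the window may linger with a count of 0; an optional filter function can be passed to reduceByKeyAndWindow to drop them.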


5. Test results


Three batches of log lines were appended to the log file, one after another.

Spark Streaming produced the following output:

(hive,1)
(spark,2)
(hadoop,2)
(storm,1)

---------------------------------------

(hive,1)
(spark,3)
(hadoop,3)
(storm,1)

---------------------------------------

(hive,2)
(spark,5)
(hadoop,5)
(storm,2)

As expected, the counts accumulate across batches because all three appends fall within the same 60-second window; this nicely demonstrates the behavior of Spark Streaming's sliding window.
