A First Look at Spark Streaming


1. Principles

The Spark Streaming execution flow is shown below:

[Figure: Spark Streaming execution flow]



Spark Streaming ingests data from sources such as Kafka, HDFS, and plain sockets, slices the incoming stream into a series of small batches by time, and hands each small batch to the Spark engine for processing.
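As a minimal sketch of wiring up a non-socket source (the application name, batch interval, and HDFS path below are illustrative, not from the original), a DStream can be created from an HDFS directory: each batch interval, any files newly created under the monitored path are read as one small RDD and handed to the engine.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object HdfsSourceSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("HdfsSourceSketch")
    // The batch interval defines the width of each time slice.
    val ssc = new StreamingContext(conf, Seconds(5))

    // Monitor an HDFS directory; new files become part of the next batch.
    val lines = ssc.textFileStream("hdfs:///tmp/streaming-input") // illustrative path
    lines.count().print() // print how many lines arrived in each batch

    ssc.start()
    ssc.awaitTermination()
  }
}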


The official word-count example (NetworkWordCount):

package org.apache.spark.examples.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.storage.StorageLevel

/**
 * Counts words in UTF8 encoded, '\n' delimited text received from the network every second.
 *
 * Usage: NetworkWordCount <hostname> <port>
 * <hostname> and <port> describe the TCP server that Spark Streaming would connect to receive data.
 *
 * To run this on your local machine, you need to first run a Netcat server
 *    `$ nc -lk 9999`
 * and then run the example
 *    `$ bin/run-example org.apache.spark.examples.streaming.NetworkWordCount localhost 9999`
 */
object NetworkWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: NetworkWordCount <hostname> <port>")
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    // Create the context with a 1 second batch size
    val sparkConf = new SparkConf().setAppName("NetworkWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(1))

    // Create a socket stream on target ip:port and count the
    // words in input stream of \n delimited text (eg. generated by 'nc')
    // Note that no duplication in storage level only for running locally.
    // Replication necessary in distributed scenario for fault tolerance.
    val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
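For example (an illustrative session; the exact console banner can differ across Spark versions), start Netcat and type a line of words:

$ nc -lk 9999
hello world hello

For the batch covering that second, the example prints something like:

-------------------------------------------
Time: ... ms
-------------------------------------------
(hello,2)
(world,1)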

The code above first creates a StreamingContext, which slices the incoming stream into batches at a 1-second interval. It then listens on a socket for incoming text and performs a word count over each 1-second batch. The figure below describes the execution flow:

[Figure: NetworkWordCount execution flow]



2. Spark Streaming Usage Scenarios

2.1 Stateless Operations

Each computation covers only the data in the current time slice. For example, with 1-second slices, each pass operates only on the RDD built from that one second of data.
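A minimal stateless sketch, reusing the lines DStream from the NetworkWordCount example above: each reduceByKey sees only the pairs produced in the current 1-second batch, so the counts reset every batch.

// Stateless: each batch is reduced on its own; nothing carries over.
val perBatchCounts = lines
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _) // counts within the current time slice only
perBatchCounts.print()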

2.2 Stateful Operations: updateStateByKey

updateStateByKey keeps folding the current time slice's RDD into the accumulated result of all historical slices, so the amount of state being computed grows as the stream runs.
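A minimal sketch of updateStateByKey, again reusing lines and ssc from the example above. The checkpoint path is illustrative, but stateful operations do require a checkpoint directory so the running state can survive failures.

// Stateful operations need a checkpoint directory to persist state.
ssc.checkpoint("hdfs:///tmp/streaming-checkpoint") // illustrative path

// Fold the values from the new batch into the running total per key.
def updateTotal(newValues: Seq[Int], running: Option[Int]): Option[Int] =
  Some(newValues.sum + running.getOrElse(0))

val runningCounts = lines
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .updateStateByKey[Int](updateTotal _) // cumulative across all batches
runningCounts.print()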

2.3 Window Operations

A sliding computation over a fixed-length window of the stream, advanced at a fixed interval. For example, with 1-second time slices, count the data produced during the last 10 minutes and refresh the result every 2 minutes.
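A minimal sketch of that windowed count (10-minute window, 2-minute slide, 1-second batches), reusing lines from the example above; both the window length and the slide interval must be multiples of the batch interval.

import org.apache.spark.streaming.Minutes

val windowedCounts = lines
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKeyAndWindow(
    (a: Int, b: Int) => a + b, // combine counts inside the window
    Minutes(10),               // window length: the last 10 minutes
    Minutes(2))                // slide interval: refresh every 2 minutes
windowedCounts.print()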


3. References

https://spark.apache.org/docs/latest/streaming-programming-guide.html
https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/NetworkWordCount.scala