Lesson 8: Spark Streaming Source Code Walkthrough: A Thorough Study of the Full Lifecycle of RDD Generation
Topics for this session:
A thorough study of the relationship between DStream and RDD
A thorough study of how RDDs are generated within a DStream
Viewed from Spark Streaming as a whole, RDDs raise three questions:
a. how are they generated, and from what;
b. at runtime, does their execution differ from that of RDDs on Spark Core;
c. how are the RDDs handled after each batch duration completes.
This session focuses on a thorough study of the full lifecycle of RDD generation.
Spark Streaming's DStream output operations include print(), saveAsTextFiles(), foreachRDD(), and so on. Ultimately, every output operation goes through foreachRDD(), and the foreachRDD() method produces a ForEachDStream.
When the program calls an output operation other than foreachRDD(), it calls foreachRDD() behind the scenes, which produces a ForEachDStream and triggers Job execution.
When the program calls foreachRDD() itself, a ForEachDStream is produced; Job execution is triggered only if the body of foreachRDD() contains an action. If it contains no action, no Job execution is triggered.
So ForEachDStream is a transformation: it causes a Job to be generated, but not necessarily executed. The actual generation of Jobs is driven by a timer inside the framework and has nothing to do with our application code.
To sum up, a ForEachDStream arises in two ways:
a. from an action, in which case a Job is both generated and executed, because the action translates into an RDD action;
b. from foreachRDD(), in which case, if its body contains no action, the Job performs no actual work.
A sketch of both cases follows.
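To make the two cases concrete, here is a minimal sketch, assuming a running StreamingContext and a DStream[String] named lines (the names are illustrative):

// case b: foreachRDD() whose body contains no RDD action. A Job is still
// generated for every batch, but it does no cluster work, because nothing
// forces the lazy transformation to evaluate.
lines.foreachRDD { (rdd, time) =>
  rdd.map(_.toUpperCase)   // transformation only; never evaluated
}

// case a: print() is itself built on foreachRDD(), but its body calls
// rdd.take(num + 1), an RDD action, so each batch's Job really runs.
lines.print()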
This is visible in the Spark sources:
/**
 * Print the first num elements of each RDD generated in this DStream. This is an output
 * operator, so this DStream will be registered as an output stream and there materialized.
 */
def print(num: Int): Unit = ssc.withScope {
  def foreachFunc: (RDD[T], Time) => Unit = {
    (rdd: RDD[T], time: Time) => {
      val firstNum = rdd.take(num + 1)
      // scalastyle:off println
      println("-------------------------------------------")
      println("Time: " + time)
      println("-------------------------------------------")
      firstNum.take(num).foreach(println)
      if (firstNum.length > num) println("...")
      println()
      // scalastyle:on println
    }
  }
  foreachRDD(context.sparkContext.clean(foreachFunc), displayInnerRDDOps = false)
}
/**
 * Save each RDD in this DStream as at text file, using string representation
 * of elements. The file name at each batch interval is generated based on
 * `prefix` and `suffix`: "prefix-TIME_IN_MS.suffix".
 */
def saveAsTextFiles(prefix: String, suffix: String = ""): Unit = ssc.withScope {
  val saveFunc = (rdd: RDD[T], time: Time) => {
    val file = rddToFileName(prefix, suffix, time)
    rdd.saveAsTextFile(file)
  }
  this.foreachRDD(saveFunc, displayInnerRDDOps = false)
}
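A brief usage note: saveAsTextFiles() produces one output directory per batch interval. A minimal sketch, assuming a DStream[(String, Int)] named wordCounts and a hypothetical HDFS path:

// each batch writes a directory named hdfs:///out/wordcounts-<TIME_IN_MS>.txt
wordCounts.map { case (word, count) => s"$word\t$count" }
  .saveAsTextFiles("hdfs:///out/wordcounts", "txt")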
The source of foreachRDD() is:
/**
 * Apply a function to each RDD in this DStream. This is an output operator, so
 * 'this' DStream will be registered as an output stream and therefore materialized.
 *
 * @param foreachFunc foreachRDD function
 * @param displayInnerRDDOps Whether the detailed callsites and scopes of the RDDs generated
 *                           in the `foreachFunc` to be displayed in the UI. If `false`, then
 *                           only the scopes and callsites of `foreachRDD` will override those
 *                           of the RDDs on the display.
 */
private def foreachRDD(
    foreachFunc: (RDD[T], Time) => Unit,
    displayInnerRDDOps: Boolean): Unit = {
  new ForEachDStream(this,
    context.sparkContext.clean(foreachFunc, false), displayInnerRDDOps).register()
}
An RDD action does not produce an RDD, and a DStream action does not produce an RDD either.
The foreachRDD() method is Spark Streaming's back door: it lets you operate directly on the underlying RDDs, while behind the scenes it still produces a ForEachDStream.
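For example, here is a minimal sketch of using this back door to run ordinary RDD-level code per batch (wordCounts is an assumed DStream[(String, Int)]):

wordCounts.foreachRDD { (rdd, time) =>
  // inside foreachFunc we are back in the plain RDD world
  val top3 = rdd.sortBy(p => -p._2).take(3)   // take() is an action, so this Job does real work
  println(s"Top 3 words at $time: ${top3.mkString(", ")}")
}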
Internally, each DStream is characterized by a few basic properties:
// DStreams depend on one another
- A list of other DStreams that the DStream depends on
// a DStream periodically generates RDDs as it computes
- A time interval at which the DStream generates an RDD
- A function that is used to generate an RDD after each time interval
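These three properties are exactly the abstract members a DStream subclass must implement: dependencies, slideDuration and compute. As a minimal sketch (illustrative only, not how real input streams are written; those also deal with receivers, rate control and registration with the graph), a DStream that returns the same RDD every batch might look like this:

import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Duration, StreamingContext, Time}
import org.apache.spark.streaming.dstream.DStream

class ConstantDStream[T: ClassTag](ssc_ : StreamingContext, rdd: RDD[T], interval: Duration)
  extends DStream[T](ssc_) {
  // property 1: the list of DStreams this DStream depends on (none here)
  override def dependencies: List[DStream[_]] = List()
  // property 2: the time interval at which an RDD is generated
  override def slideDuration: Duration = interval
  // property 3: the function that generates an RDD for each interval
  override def compute(validTime: Time): Option[RDD[T]] = Some(rdd)
}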
TransformedDStream, a concrete DStream implementation:
private[streaming]
class TransformedDStream[U: ClassTag] (
    parents: Seq[DStream[_]],
    transformFunc: (Seq[RDD[_]], Time) => RDD[U]
  ) extends DStream[U](parents.head.ssc) {

  require(parents.length > 0, "List of DStreams to transform is empty")
  require(parents.map(_.ssc).distinct.size == 1, "Some of the DStreams have different contexts")
  require(parents.map(_.slideDuration).distinct.size == 1,
    "Some of the DStreams have different slide durations")

  override def dependencies: List[DStream[_]] = parents.toList

  override def slideDuration: Duration = parents.head.slideDuration

  // computing a DStream produces RDDs, which is why we say a DStream is
  // logical-level: a template for RDDs
  override def compute(validTime: Time): Option[RDD[U]] = {
    val parentRDDs = parents.map { parent => parent.getOrCompute(validTime).getOrElse(
      // Guard out against parent DStream that return None instead of Some(rdd) to avoid NPE
      throw new SparkException(s"Couldn't generate RDD from parent at time $validTime"))
    }
    val transformedRDD = transformFunc(parentRDDs, validTime)
    if (transformedRDD == null) {
      throw new SparkException("Transform function must not return null. " +
        "Return SparkContext.emptyRDD() instead to represent no element " +
        "as the result of transformation.")
    }
    Some(transformedRDD)
  }

  /**
   * Wrap a body of code such that the call site and operation scope
   * information are passed to the RDDs created in this body properly.
   * This has been overriden to make sure that `displayInnerRDDOps` is always `true`, that is,
   * the inner scopes and callsites of RDDs generated in `DStream.transform` are always
   * displayed in the UI.
   */
  override protected[streaming] def createRDDWithLocalProperties[U](
      time: Time,
      displayInnerRDDOps: Boolean)(body: => U): U = {
    super.createRDDWithLocalProperties(time, displayInnerRDDOps = true)(body)
  }
}
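The public entry point to TransformedDStream is DStream.transform, which applies an arbitrary RDD-to-RDD function once per batch. A minimal sketch, assuming a DStream[String] named words and a SparkContext sc:

// a static blacklist subtracted from every batch
val blacklist = sc.parallelize(Seq("spam", "ads"))
val clean = words.transform { rdd =>
  rdd.subtract(blacklist)   // any RDD-to-RDD function; invoked once per batch time
}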
ForEachDStream, another concrete DStream implementation:
/**
 * An internal DStream used to represent output operations like DStream.foreachRDD.
 * @param parent Parent DStream
 * @param foreachFunc Function to apply on each RDD generated by the parent DStream
 * @param displayInnerRDDOps Whether the detailed callsites and scopes of the RDDs generated
 *                           by `foreachFunc` will be displayed in the UI; only the scope and
 *                           callsite of `DStream.foreachRDD` will be displayed.
 */
private[streaming]
class ForEachDStream[T: ClassTag] (
    parent: DStream[T],
    foreachFunc: (RDD[T], Time) => Unit,
    displayInnerRDDOps: Boolean
  ) extends DStream[Unit](parent.ssc) {

  // DStream dependencies
  override def dependencies: List[DStream[_]] = List(parent)

  override def slideDuration: Duration = parent.slideDuration

  override def compute(validTime: Time): Option[RDD[Unit]] = None

  // ForEachDStream generates Jobs
  override def generateJob(time: Time): Option[Job] = {
    parent.getOrCompute(time) match {
      case Some(rdd) =>
        val jobFunc = () => createRDDWithLocalProperties(time, displayInnerRDDOps) {
          foreachFunc(rdd, time)
        }
        Some(new Job(time, jobFunc))
      case None => None
    }
  }
}
In essence, every operation in a streaming program produces a DStream and is a transformation. It is only when these operations are mapped to physical-level RDD operations that some of them (the output operations) map to RDD actions and trigger Job execution.
The first DStream produced is an InputDStream; the pipeline then passes through various Transformations on DStreams and ends with Output Operations on DStreams, producing a ForEachDStream.
Like RDDs, DStreams depend on one another from back to front, and they are lazy.
A DStream is the template for its RDDs, and the DStreamGraph is the template for the DAG.
// type: SocketInputDStream (an InputDStream)
val lines = ssc.socketTextStream("localhost", 9999)
// type: FlatMappedDStream
val words = lines.flatMap(_.split(" "))
// type: MappedDStream
val pairs = words.map(word => (word, 1))
// type: ShuffledDStream
val wordCounts = pairs.reduceByKey(_ + _)
// type: ForEachDStream
wordCounts.print()
DStream has a member called generatedRDDs, so logically every DStream instance holds its own generatedRDDs. In physical execution, however, DStreams depend on one another from back to front: we only hold a handle to the last DStream, and at execution time it is the last DStream that drives everything, just as with RDDs. It is simply the unfolding of functions.
// RDDs generated, marked as private[streaming] so that testsuites can access it
// each Time corresponds to one RDD, and each RDD corresponds to one Job
// RDDs have dependencies, so tracing back from the last one reaches all the RDDs
@transient
private[streaming] var generatedRDDs = new HashMap[Time, RDD[T]]()
So how does generatedRDDs get populated? Through DStream's getOrCompute(time: Time) method.
/**
 * Get the RDD corresponding to the given time; either retrieve it from cache
 * or compute-and-cache it.
 */
private[streaming] final def getOrCompute(time: Time): Option[RDD[T]] = {
  // If RDD was already generated, then retrieve it from HashMap,
  // or else compute the RDD
  generatedRDDs.get(time).orElse {
    // every sliding window produces an RDD
    // Compute the RDD if time is valid (e.g. correct time in a sliding window)
    // of RDD generation, else generate nothing.
    if (isTimeValid(time)) {
      val rddOption = createRDDWithLocalProperties(time, displayInnerRDDOps = false) {
        // Disable checks for existing output directories in jobs launched by the streaming
        // scheduler, since we may need to write output to an existing directory during checkpoint
        // recovery; see SPARK-4835 for more details. We need to have this call here because
        // compute() might cause Spark jobs to be launched.
        PairRDDFunctions.disableOutputSpecValidation.withValue(true) {
          compute(time)
        }
      }

      rddOption.foreach { case newRDD =>
        // Register the generated RDD for caching and checkpointing
        if (storageLevel != StorageLevel.NONE) {
          newRDD.persist(storageLevel)
          logDebug(s"Persisting RDD ${newRDD.id} for time $time to $storageLevel")
        }
        if (checkpointDuration != null && (time - zeroTime).isMultipleOf(checkpointDuration)) {
          newRDD.checkpoint()
          logInfo(s"Marking RDD ${newRDD.id} for time $time for checkpointing")
        }
        generatedRDDs.put(time, newRDD)
      }
      rddOption
    } else {
      None
    }
  }
}
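The storageLevel and checkpointDuration consulted above are set through the public DStream API. A minimal sketch, again assuming a DStream named wordCounts:

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.Seconds

// sets storageLevel, so getOrCompute() will call newRDD.persist(...)
wordCounts.persist(StorageLevel.MEMORY_ONLY_SER)
// sets checkpointDuration, so RDDs at times that are multiples of 10 seconds
// get marked by newRDD.checkpoint()
wordCounts.checkpoint(Seconds(10))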
The compute(validTime: Time) method of ReceiverInputDStream, a DStream subclass:
/**
 * Generates RDDs with blocks received by the receiver of this stream.
 */
override def compute(validTime: Time): Option[RDD[T]] = {
  val blockRDD = {
    if (validTime < graph.startTime) {
      // If this is called for any time before the start time of the context,
      // then this returns an empty RDD. This may happen when recovering from a
      // driver failure without any write ahead log to recover pre-failure data.
      new BlockRDD[T](ssc.sc, Array.empty)
    } else {
      // Otherwise, ask the tracker for all the blocks that have been allocated to this stream
      // for this batch, i.e. get the data obtained from the input source via the ReceiverTracker
      val receiverTracker = ssc.scheduler.receiverTracker
      val blockInfos = receiverTracker.getBlocksOfBatch(validTime).getOrElse(id, Seq.empty)

      // Register the input blocks information into InputInfoTracker
      val inputInfo = StreamInputInfo(id, blockInfos.flatMap(_.numRecords).sum)
      ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)

      // Create the BlockRDD
      createBlockRDD(validTime, blockInfos)
    }
  }
  Some(blockRDD)
}
private[streaming] def createBlockRDD(time: Time, blockInfos: Seq[ReceivedBlockInfo]): RDD[T] = {
  if (blockInfos.nonEmpty) {
    val blockIds = blockInfos.map { _.blockId.asInstanceOf[BlockId] }.toArray

    // Are WAL record handles present with all the blocks
    val areWALRecordHandlesPresent = blockInfos.forall { _.walRecordHandleOption.nonEmpty }

    if (areWALRecordHandlesPresent) {
      // If all the blocks have WAL record handle, then create a WALBackedBlockRDD
      val isBlockIdValid = blockInfos.map { _.isBlockIdValid() }.toArray
      val walRecordHandles = blockInfos.map { _.walRecordHandleOption.get }.toArray
      new WriteAheadLogBackedBlockRDD[T](
        ssc.sparkContext, blockIds, walRecordHandles, isBlockIdValid)
    } else {
      // Else, create a BlockRDD. However, if there are some blocks with WAL info but not
      // others then that is unexpected and log a warning accordingly.
      if (blockInfos.find(_.walRecordHandleOption.nonEmpty).nonEmpty) {
        if (WriteAheadLogUtils.enableReceiverLog(ssc.conf)) {
          logError("Some blocks do not have Write Ahead Log information; " +
            "this is unexpected and data may not be recoverable after driver failures")
        } else {
          logWarning("Some blocks have Write Ahead Log information; this is unexpected")
        }
      }
      // verify once more that the blocks still exist
      val validBlockIds = blockIds.filter { id =>
        ssc.sparkContext.env.blockManager.master.contains(id)
      }
      if (validBlockIds.size != blockIds.size) {
        logWarning("Some blocks could not be recovered as they were not found in memory. " +
          "To prevent such data loss, enabled Write Ahead Log (see programming guide " +
          "for more details.")
      }
      new BlockRDD[T](ssc.sc, validBlockIds)
    }
  } else {
    // If no block is ready now, creating WriteAheadLogBackedBlockRDD or BlockRDD
    // according to the configuration
    if (WriteAheadLogUtils.enableReceiverLog(ssc.conf)) {
      new WriteAheadLogBackedBlockRDD[T](
        ssc.sparkContext, Array.empty, Array.empty, Array.empty)
    } else {
      // an RDD is produced even when there is no input; it is just empty
      new BlockRDD[T](ssc.sc, Array.empty)
    }
  }
}
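Which branch runs above is decided by whether the receiver write-ahead log is enabled. A minimal configuration sketch (the checkpoint path is hypothetical):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("WALDemo")
  // makes WriteAheadLogUtils.enableReceiverLog(ssc.conf) return true, so
  // createBlockRDD builds WriteAheadLogBackedBlockRDDs instead of BlockRDDs
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")
val ssc = new StreamingContext(conf, Seconds(2))
ssc.checkpoint("hdfs:///checkpoints/waldemo")   // the WAL needs a checkpoint directory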
Next, the compute(validTime: Time) method of MappedDStream, another DStream subclass:
private[streaming]
class MappedDStream[T: ClassTag, U: ClassTag] (
    parent: DStream[T],
    mapFunc: T => U
  ) extends DStream[U](parent.ssc) {

  override def dependencies: List[DStream[_]] = List(parent)

  override def slideDuration: Duration = parent.slideDuration

  override def compute(validTime: Time): Option[RDD[U]] = {
    // getOrCompute() generates the RDD; in other words, the RDD here comes from the parent DStream
    // so although logically there are many RDDs, in effect there is only one chain, traced back to front
    // the map here operates on the RDD, so computing a DStream really means computing RDDs
    parent.getOrCompute(validTime).map(_.map[U](mapFunc))
  }
}
Every DStream generates an RDD when it computes. The first DStream has to generate its RDD by itself; every other DStream obtains an RDD from its parent, computes on it, and returns the result. That is, a DStream's compute() method returns an RDD, which the DStream merely wraps; the computation itself is at the physical level. A transformation on a DStream thus maps onto a transformation on RDDs, a perfect mapping except that the time dimension is added.
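To see this "unfolding of functions" in isolation, here is a toy model of the template pattern in plain Scala (not Spark code; Vector stands in for RDD and Long for Time):

import scala.collection.mutable

trait ToyStream[T] {
  // plays the role of generatedRDDs: one "RDD" per batch time
  val generated = mutable.HashMap[Long, Vector[T]]()
  def compute(time: Long): Option[Vector[T]]
  final def getOrCompute(time: Long): Option[Vector[T]] =
    generated.get(time).orElse {
      val v = compute(time)
      v.foreach(generated.put(time, _))
      v
    }
}

// the "input stream": generates its data by itself
class ToySource(data: Long => Vector[String]) extends ToyStream[String] {
  def compute(time: Long): Option[Vector[String]] = Some(data(time))
}

// the "MappedDStream": pulls from its parent, then applies the function
class ToyMapped[T, U](parent: ToyStream[T], f: T => U) extends ToyStream[U] {
  def compute(time: Long): Option[Vector[U]] =
    parent.getOrCompute(time).map(_.map(f))
}

object ToyDemo extends App {
  val src = new ToySource(t => Vector(s"batch-$t", "data"))
  val lengths = new ToyMapped(src, (s: String) => s.length)
  // holding only the last stream pulls the whole chain, Spark-style
  println(lengths.getOrCompute(1L))   // Some(Vector(7, 4))
}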
Now let's look at the DStream that can give rise to an action: ForEachDStream.
/**
 * An internal DStream used to represent output operations like DStream.foreachRDD.
 * @param parent Parent DStream
 * @param foreachFunc Function to apply on each RDD generated by the parent DStream
 * @param displayInnerRDDOps Whether the detailed callsites and scopes of the RDDs generated
 *                           by `foreachFunc` will be displayed in the UI; only the scope and
 *                           callsite of `DStream.foreachRDD` will be displayed.
 */
private[streaming]
class ForEachDStream[T: ClassTag] (
    parent: DStream[T],
    foreachFunc: (RDD[T], Time) => Unit,
    displayInnerRDDOps: Boolean
  ) extends DStream[Unit](parent.ssc) {

  override def dependencies: List[DStream[_]] = List(parent)

  override def slideDuration: Duration = parent.slideDuration

  // compute does nothing here; what actually runs is generateJob
  override def compute(validTime: Time): Option[RDD[Unit]] = None

  // generateJob() is driven by the scheduler, not by our DStream
  override def generateJob(time: Time): Option[Job] = {
    parent.getOrCompute(time) match {
      case Some(rdd) =>
        // jobFunc wraps the concrete function to be executed
        val jobFunc = () => createRDDWithLocalProperties(time, displayInnerRDDOps) {
          foreachFunc(rdd, time)
        }
        // the new Job is the business logic, a runnable object
        Some(new Job(time, jobFunc))
      case None => None
    }
  }
}
foreachFunc(rdd, time) is typically an output function; it triggers the output action and is applied to the RDD at a specific time. Here is a concrete foreachFunc:
/**
 * Print the first num elements of each RDD generated in this DStream. This is an output
 * operator, so this DStream will be registered as an output stream and there materialized.
 */
def print(num: Int): Unit = ssc.withScope {
  def foreachFunc: (RDD[T], Time) => Unit = {
    (rdd: RDD[T], time: Time) => {
      val firstNum = rdd.take(num + 1)
      // scalastyle:off println
      println("-------------------------------------------")
      println("Time: " + time)
      println("-------------------------------------------")
      firstNum.take(num).foreach(println)
      if (firstNum.length > num) println("...")
      println()
      // scalastyle:on println
    }
  }
  foreachRDD(context.sparkContext.clean(foreachFunc), displayInnerRDDOps = false)
}
JobGenerator's generateJobs(time: Time) method calls DStreamGraph.generateJobs(time):
/** Generate jobs and perform checkpoint for the given `time`. */
private def generateJobs(time: Time) {
  // Set the SparkEnv in this thread, so that job generation code can access the environment
  // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
  // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed.
  SparkEnv.set(ssc.env)
  Try {
    jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
    graph.generateJobs(time) // generate jobs using allocated block
  } match {
    case Success(jobs) =>
      val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
      jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
    case Failure(e) =>
      jobScheduler.reportError("Error generating jobs for time " + time, e)
  }
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
}
DStreamGraph.generateJobs(time) in turn calls each outputStream's generateJob(time) method:
def generateJobs(time: Time): Seq[Job] = {
  logDebug("Generating jobs for time " + time)
  val jobs = this.synchronized {
    outputStreams.flatMap { outputStream =>
      val jobOption = outputStream.generateJob(time)
      jobOption.foreach(_.setCallSite(outputStream.creationSite))
      jobOption
    }
  }
  logDebug("Generated " + jobs.length + " jobs for time " + time)
  jobs
}
The generateJob(time: Time) method of ForEachDStream, a concrete output stream implementation:
override def generateJob(time: Time): Option[Job] = {
  parent.getOrCompute(time) match {
    case Some(rdd) =>
      val jobFunc = () => createRDDWithLocalProperties(time, displayInnerRDDOps) {
        foreachFunc(rdd, time)
      }
      Some(new Job(time, jobFunc))
    case None => None
  }
}
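Putting the pieces together, the sources above give roughly this driver-side call chain for every batch:

JobGenerator timer fires at time
  -> JobGenerator.generateJobs(time)
       -> receiverTracker.allocateBlocksToBatch(time)   // bind received blocks to this batch
       -> DStreamGraph.generateJobs(time)
            -> ForEachDStream.generateJob(time)
                 -> parent.getOrCompute(time)           // this is where the batch's RDDs are born
       -> jobScheduler.submitJobSet(...)                // the submitted Jobs then run foreachFunc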