Spark-streaming-scheduler
@(spark)[streaming|scheduler]
BatchInfo
```scala
/**
 * :: DeveloperApi ::
 * Class having information on completed batches.
 * @param batchTime Time of the batch
 * @param submissionTime Clock time of when jobs of this batch were submitted to
 *                       the streaming scheduler queue
 * @param processingStartTime Clock time of when the first job of this batch started processing
 * @param processingEndTime Clock time of when the last job of this batch finished processing
 */
@DeveloperApi
case class BatchInfo(
    batchTime: Time,
    receivedBlockInfo: Map[Int, Array[ReceivedBlockInfo]],
    submissionTime: Long,
    processingStartTime: Option[Long],
    processingEndTime: Option[Long]
  ) {
```
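These timestamps are what the batch delay metrics are derived from. A minimal standalone sketch (a simplified stand-in, not the real Spark class, with times as plain `Long` milliseconds) of how scheduling and processing delay fall out of the fields:

```scala
// Simplified stand-in for Spark's BatchInfo; field names mirror the case
// class above, but this is a sketch, not the real API.
case class BatchInfoSketch(
    batchTime: Long,
    submissionTime: Long,
    processingStartTime: Option[Long],
    processingEndTime: Option[Long]) {

  // How long the batch's first job waited in the scheduler queue.
  def schedulingDelay: Option[Long] = processingStartTime.map(_ - submissionTime)

  // Time spent actually processing the batch's jobs.
  def processingDelay: Option[Long] =
    for (start <- processingStartTime; end <- processingEndTime) yield end - start

  // End-to-end delay: queueing plus processing.
  def totalDelay: Option[Long] =
    for (s <- schedulingDelay; p <- processingDelay) yield s + p
}

val info = BatchInfoSketch(
  batchTime = 1000L, submissionTime = 1005L,
  processingStartTime = Some(1020L), processingEndTime = Some(1100L))

println(info.schedulingDelay) // Some(15)
println(info.processingDelay) // Some(80)
println(info.totalDelay)      // Some(95)
```

The `Option` fields matter: a batch that has been submitted but not yet started has no processing times, so the delays are only defined once the corresponding clock readings exist.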
ReceivedBlockTracker
```scala
/**
 * Class that keeps track of all the received blocks, and allocates them to batches
 * when required. All actions taken by this class can be saved to a write ahead log
 * (if a checkpoint directory has been provided), so that the state of the tracker
 * (received blocks and block-to-batch allocations) can be recovered after driver failure.
 *
 * Note that when any instance of this class is created with a checkpoint directory,
 * it will try reading events from logs in the directory.
 */
private[streaming] class ReceivedBlockTracker(
    conf: SparkConf,
    hadoopConf: Configuration,
    streamIds: Seq[Int],
    clock: Clock,
    checkpointDirOption: Option[String])
  extends Logging {
```
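The core bookkeeping here is a per-stream queue of unallocated blocks that gets drained into a batch-time-keyed map at each batch boundary. A hypothetical, much-simplified sketch of that allocation step (ignoring the write-ahead log and recovery; names and types are assumptions for illustration):

```scala
import scala.collection.mutable

// Sketch: blocks arrive per stream id; on each batch boundary, all
// currently-queued blocks are drained and recorded against that batch time.
class BlockTrackerSketch(streamIds: Seq[Int]) {
  private val queues: Map[Int, mutable.Queue[String]] =
    streamIds.map(id => id -> mutable.Queue.empty[String]).toMap
  private val allocations = mutable.Map.empty[Long, Map[Int, Seq[String]]]

  def addBlock(streamId: Int, blockId: String): Unit =
    queues(streamId).enqueue(blockId)

  // Assign every queued block to the given batch, emptying the queues.
  def allocateBlocksToBatch(batchTime: Long): Unit = {
    val snapshot = queues.map { case (id, q) => id -> q.dequeueAll(_ => true).toSeq }
    allocations(batchTime) = snapshot
  }

  def blocksOfBatch(batchTime: Long): Map[Int, Seq[String]] =
    allocations.getOrElse(batchTime, Map.empty)
}

val tracker = new BlockTrackerSketch(Seq(0, 1))
tracker.addBlock(0, "b0")
tracker.addBlock(1, "b1")
tracker.allocateBlocksToBatch(2000L)
```

In the real class each of these mutations is first written to the WAL (when a checkpoint directory is configured), which is exactly what makes the queue-to-batch state recoverable after a driver failure.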
ReceiverTracker
```scala
/**
 * This class manages the execution of the receivers of ReceiverInputDStreams. Instances of
 * this class must be created after all input streams have been added and StreamingContext.start()
 * has been called because it needs the final set of input streams at the time of instantiation.
 *
 * @param skipReceiverLaunch Do not launch the receiver. This is useful for testing.
 */
private[streaming] class ReceiverTracker(ssc: StreamingContext, skipReceiverLaunch: Boolean = false)
  extends Logging {
```
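Beyond launching receivers, the tracker is the registry the running receivers report back to. A hypothetical sketch of that registration bookkeeping (simplified names; the real class validates stream ids against the input streams known at instantiation, which is why it must be created after all of them are added):

```scala
import scala.collection.mutable

// Sketch: receivers register when they start on an executor; registration
// is refused for stream ids the tracker does not know about.
class ReceiverTrackerSketch(receiverInputStreamIds: Set[Int]) {
  private val active = mutable.Set.empty[Int]

  def registerReceiver(streamId: Int): Boolean =
    if (receiverInputStreamIds.contains(streamId)) { active += streamId; true }
    else false // unknown stream id: reject

  def deregisterReceiver(streamId: Int): Unit = active -= streamId

  def activeReceivers: Set[Int] = active.toSet
}

val t = new ReceiverTrackerSketch(Set(0, 1))
println(t.registerReceiver(0)) // true
println(t.registerReceiver(7)) // false
```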
Its key part is:
```scala
/** This thread class runs all the receivers on the cluster. */
class ReceiverLauncher {
```
Job
```scala
/**
 * Class representing a Spark computation. It may contain multiple Spark jobs.
 */
private[streaming] class Job(val time: Time, func: () => _) {
```
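A streaming `Job` is essentially a batch time paired with a thunk to execute. A hypothetical simplification showing that shape, with `run()` capturing success or failure (the result-tracking here is an illustration, not the real class's fields):

```scala
import scala.util.Try

// Sketch of the Job wrapper: batch time plus a deferred computation.
class JobSketch(val time: Long, func: () => Unit) {
  private var result: Option[Try[Unit]] = None

  // Execute the wrapped computation, recording whether it threw.
  def run(): Unit = { result = Some(Try(func())) }

  def succeeded: Boolean = result.exists(_.isSuccess)
}

val ok = new JobSketch(1000L, () => ())
ok.run()
println(ok.succeeded) // true

val bad = new JobSketch(1000L, () => throw new RuntimeException("boom"))
bad.run()
println(bad.succeeded) // false
```

The `() => _` signature in the real class is what lets one streaming job trigger any number of underlying Spark jobs: the scheduler only ever sees an opaque function to invoke.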
JobSet
```scala
/** Class representing a set of Jobs that belong to the same batch. */
private[streaming] case class JobSet(
    time: Time,
    jobs: Seq[Job],
    receivedBlockInfo: Map[Int, Array[ReceivedBlockInfo]] = Map.empty
  ) {
```
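The point of grouping jobs into a `JobSet` is that the batch is only finished when every job in it has been handled. A hypothetical sketch of that completion tracking (method names chosen for illustration):

```scala
// Sketch of JobSet bookkeeping: one batch's jobs, with the batch
// considered complete once every job id has been reported done.
case class JobSetSketch(time: Long, jobIds: Seq[Int]) {
  private var incomplete: Set[Int] = jobIds.toSet

  def handleJobCompletion(jobId: Int): Unit = { incomplete -= jobId }

  def hasCompleted: Boolean = incomplete.isEmpty
}

val set = JobSetSketch(1000L, Seq(0, 1))
set.handleJobCompletion(0)
println(set.hasCompleted) // false
set.handleJobCompletion(1)
println(set.hasCompleted) // true
```

This is also where the per-batch timestamps live in practice: the scheduler stamps submission, first-job-start, and last-job-end times on the `JobSet`, which is what ultimately populates the `BatchInfo` fields shown earlier.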