6. Job Trigger Flow: Principle Analysis and Source Code Walkthrough

Let's start from a small Spark demo program we have written:
  val lines = sparkContext.textFile("")
  val words = lines.flatMap(line => line.split("\t"))
  val pairs = words.map(word => (word, 1))
  val counts = pairs.reduceByKey(_ + _)
  counts.foreach(count => println(count._1 + "_" + count._2))
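For context, here is a minimal self-contained sketch of the same word count wrapped in a standard SparkContext setup; the application name, master URL, and input path are placeholders I introduced for illustration, not values from the original demo:

  import org.apache.spark.{SparkConf, SparkContext}

  object WordCountDemo {
    def main(args: Array[String]): Unit = {
      // Hypothetical local setup; any master URL and input path would do.
      val conf = new SparkConf().setAppName("WordCountDemo").setMaster("local[*]")
      val sc = new SparkContext(conf)

      val lines  = sc.textFile("hdfs://namenode:9000/path/to/input.txt") // placeholder path
      val words  = lines.flatMap(line => line.split("\t"))
      val pairs  = words.map(word => (word, 1))
      val counts = pairs.reduceByKey(_ + _)

      // foreach is the action that actually triggers the job walked through below.
      counts.foreach(count => println(count._1 + "_" + count._2))

      sc.stop()
    }
  }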

The source of the textFile method in SparkContext is as follows:
  /**
   * Read a text file from HDFS, a local file system (available on all nodes), or any
   * Hadoop-supported file system URI, and return it as an RDD of Strings.
   *
   * The call to hadoopFile() first creates a HadoopRDD whose elements are (key, value) pairs,
   * where the key is the byte offset of the line in the file and the value is the line's text.
   * map() is then called on that HadoopRDD to drop the key and keep only the value,
   * which yields a MapPartitionsRDD containing just the text lines.
   */
  def textFile(path: String, minPartitions: Int = defaultMinPartitions): RDD[String] = {
    assertNotStopped()
    hadoopFile(path, classOf[TextInputFormat], classOf[LongWritable], classOf[Text],
      minPartitions).map(pair => pair._2.toString).setName(path)
  }
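In other words, textFile is just hadoopFile with TextInputFormat plus a map that drops the key. A brief sketch of that equivalence, assuming a SparkContext named sc and a placeholder input path:

  import org.apache.hadoop.io.{LongWritable, Text}
  import org.apache.hadoop.mapred.TextInputFormat

  val path = "hdfs://namenode:9000/path/to/input.txt" // placeholder path

  // What sc.textFile(path) does under the hood:
  //   1. build a HadoopRDD of (byte offset, line) pairs,
  //   2. map away the key, keeping only the line text.
  val viaTextFile   = sc.textFile(path)
  val viaHadoopFile = sc
    .hadoopFile(path, classOf[TextInputFormat], classOf[LongWritable], classOf[Text])
    .map(pair => pair._2.toString)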

The reasoning is explained in the comments above: textFile delegates to the hadoopFile method, whose source is as follows:
  /** Get an RDD for a Hadoop file with an arbitrary InputFormat
   *
   * '''Note:''' Because Hadoop's RecordReader class re-uses the same Writable object for each
   * record, directly caching the returned RDD or directly passing it to an aggregation or shuffle
   * operation will create many references to the same object.
   * If you plan to directly cache, sort, or aggregate Hadoop writable objects, you should first
   * copy them using a `map` function.
   */
  def hadoopFile[K, V](
      path: String,
      inputFormatClass: Class[_ <: InputFormat[K, V]],
      keyClass: Class[K],
      valueClass: Class[V],
      minPartitions: Int = defaultMinPartitions
      ): RDD[(K, V)] = {
    assertNotStopped()
    // A Hadoop configuration can be about 10 KB, which is pretty big, so broadcast it.
    // Broadcasting the Hadoop configuration lets every node read the same configuration.
    val confBroadcast = broadcast(new SerializableWritable(hadoopConfiguration))
    val setInputPathsFunc = (jobConf: JobConf) => FileInputFormat.setInputPaths(jobConf, path)
    // Create the HadoopRDD object
    new HadoopRDD(
      this,
      confBroadcast,
      Some(setInputPathsFunc),
      inputFormatClass,
      keyClass,
      valueClass,
      minPartitions).setName(path)
  }
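The '''Note''' in the doc comment above has a practical consequence: the RecordReader reuses the same Writable objects, so the pairs should be copied into plain values before caching, sorting, or shuffling. A minimal sketch of that pattern, assuming a SparkContext named sc and a placeholder path:

  import org.apache.hadoop.io.{LongWritable, Text}
  import org.apache.hadoop.mapred.TextInputFormat

  val raw = sc.hadoopFile("hdfs://namenode:9000/path/to/input.txt", // placeholder path
    classOf[TextInputFormat], classOf[LongWritable], classOf[Text])

  // Copy the reused Writables into immutable Scala values first;
  // only then is it safe to cache or aggregate the RDD.
  val safe = raw.map { case (offset, line) => (offset.get, line.toString) }
  safe.cache()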

After the HadoopRDD is created and transformed by flatMap and map, the call to reduceByKey goes through an implicit conversion, because the RDD class itself has no reduceByKey method.
Calling reduceByKey therefore actually goes through the following method:
  /**
   * RDD does not define reduceByKey, so calling reduceByKey() on an RDD triggers a Scala implicit
   * conversion: the compiler searches the implicit scope, finds rddToPairRDDFunctions(), wraps the
   * RDD in a PairRDDFunctions, and then calls reduceByKey on that wrapper.
   */
  implicit def rddToPairRDDFunctions[K, V](rdd: RDD[(K, V)])
      (implicit kt: ClassTag[K], vt: ClassTag[V], ord: Ordering[K] = null): PairRDDFunctions[K, V] = {
    new PairRDDFunctions(rdd)
  }
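To make the mechanism concrete, here is a minimal self-contained Scala sketch of the same pattern with hypothetical names (RichLine, tabCount); it is not Spark code, just an illustration of how an implicit def makes a method appear on a type that does not define it:

  import scala.language.implicitConversions

  object ImplicitDemo {
    // A wrapper that adds a method String itself does not have.
    class RichLine(s: String) {
      def tabCount: Int = s.count(_ == '\t')
    }

    // The compiler inserts this conversion when tabCount is called on a plain String,
    // exactly as rddToPairRDDFunctions wraps an RDD[(K, V)] in PairRDDFunctions.
    implicit def stringToRichLine(s: String): RichLine = new RichLine(s)

    def main(args: Array[String]): Unit = {
      // Rewritten by the compiler to stringToRichLine("a\tb\tc").tabCount
      println("a\tb\tc".tabCount) // prints 2
    }
  }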
As the code above shows, the real reduceByKey methods live in PairRDDFunctions; their source is as follows:
  /**
   * Merge the values for each key using an associative reduce function. This will also perform
   * the merging locally on each mapper before sending results to a reducer, similarly to a
   * "combiner" in MapReduce.
   */
  def reduceByKey(partitioner: Partitioner, func: (V, V) => V): RDD[(K, V)] = {
    combineByKey[V]((v: V) => v, func, func, partitioner)
  }

  /**
   * Merge the values for each key using an associative reduce function. This will also perform
   * the merging locally on each mapper before sending results to a reducer, similarly to a
   * "combiner" in MapReduce. Output will be hash-partitioned with numPartitions partitions.
   */
  def reduceByKey(func: (V, V) => V, numPartitions: Int): RDD[(K, V)] = {
    reduceByKey(new HashPartitioner(numPartitions), func)
  }

  /**
   * Merge the values for each key using an associative reduce function. This will also perform
   * the merging locally on each mapper before sending results to a reducer, similarly to a
   * "combiner" in MapReduce. Output will be hash-partitioned with the existing partitioner/
   * parallelism level.
   */
  def reduceByKey(func: (V, V) => V): RDD[(K, V)] = {
    reduceByKey(defaultPartitioner(self), func)
  }
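The three overloads differ only in how the partitioner is chosen, and the other two funnel into the one that takes a Partitioner. A brief usage sketch against the pairs RDD from the demo (the partition count 8 is arbitrary):

  import org.apache.spark.HashPartitioner

  // Default partitioner: reuse an existing partitioner or fall back to the default parallelism.
  val countsDefault = pairs.reduceByKey(_ + _)

  // Explicit number of partitions: output is hash-partitioned into 8 partitions.
  val countsEight = pairs.reduceByKey(_ + _, 8)

  // Explicit Partitioner instance: the overload the other two delegate to.
  val countsCustom = pairs.reduceByKey(new HashPartitioner(8), _ + _)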

Finally, no matter how many transformation operators are chained together, nothing runs until an action operator is applied. The foreach call is therefore what triggers all of the operators above,
and the source of RDD's foreach is as follows:
  /**
   * Applies a function f to all elements of this RDD.
   * When an action such as foreach is called on an RDD, it triggers a job;
   * ultimately it calls SparkContext's runJob method.
   */
  def foreach(f: T => Unit) {
    val cleanF = sc.clean(f)
    // Call SparkContext's runJob method
    sc.runJob(this, (iter: Iterator[T]) => iter.foreach(cleanF))
  }
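A small sketch of this laziness, assuming a SparkContext named sc and a placeholder input path; the transformations alone schedule nothing, while the action submits a job through runJob:

  // Transformations only record lineage; no job is submitted by these two lines.
  val lines   = sc.textFile("hdfs://namenode:9000/path/to/input.txt") // placeholder path
  val lengths = lines.map(_.length)

  // The action is what calls SparkContext.runJob and actually runs the job
  // (foreach, count, collect, ... all end up there).
  lengths.foreach(len => println(len))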

What ultimately gets called is SparkContext's runJob method, where the job is actually run:
  /**
   * Run a function on a given set of partitions in an RDD and pass the results to the given
   * handler function. This is the main entry point for all actions in Spark. The allowLocal
   * flag specifies whether the scheduler can run the computation on the driver rather than
   * shipping it out to the cluster, for short actions like first().
   */
  def runJob[T, U: ClassTag](
      rdd: RDD[T],
      func: (TaskContext, Iterator[T]) => U,
      partitions: Seq[Int],
      allowLocal: Boolean,
      resultHandler: (Int, U) => Unit) {
    if (stopped) {
      throw new IllegalStateException("SparkContext has been shutdown")
    }
    val callSite = getCallSite
    val cleanedFunc = clean(func)
    logInfo("Starting job: " + callSite.shortForm)
    if (conf.getBoolean("spark.logLineage", false)) {
      logInfo("RDD's recursive dependencies:\n" + rdd.toDebugString)
    }
    // The most important line: hand the job to SparkContext's DAGScheduler component to run
    dagScheduler.runJob(rdd, cleanedFunc, partitions, callSite, allowLocal,
      resultHandler, localProperties.get)
    progressBar.foreach(_.finishAll())
    rdd.doCheckpoint()
  }
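Note the spark.logLineage branch above: the same lineage that the DAGScheduler will walk can be inspected with RDD.toDebugString, either by enabling that flag or by printing it yourself. A short sketch, with a hypothetical app name and placeholder path:

  import org.apache.spark.{SparkConf, SparkContext}

  // Either enable the flag checked inside runJob...
  val conf = new SparkConf()
    .setAppName("LineageDemo")          // hypothetical app name
    .setMaster("local[*]")
    .set("spark.logLineage", "true")    // runJob will log rdd.toDebugString when the job starts
  val sc = new SparkContext(conf)

  // ...or print the recursive dependencies yourself before triggering the action.
  val counts = sc.textFile("hdfs://namenode:9000/path/to/input.txt") // placeholder path
    .flatMap(_.split("\t"))
    .map((_, 1))
    .reduceByKey(_ + _)
  println(counts.toDebugString)
  counts.foreach(println)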

The component in SparkContext that actually runs a job is the DAGScheduler discussed earlier. Running a job requires splitting it into stages, and the stage-division algorithm is what we will look at next.