Lecture 225: SortShuffle in Spark's Pluggable Shuffle Framework, and a Walkthrough of Its Creation Source Code




In the beginning, Spark only supported Hash-Based Shuffle. By default, Hash-Based Shuffle has the Mapper stage create a separate file for every Task in the Reducer stage, holding the data that task will consume. In some situations (for example, with very large data volumes) this produces a huge number of files (M*R, where M is the total number of parallel tasks on the Mapper side and R is the total number of parallel tasks on the Reducer side), causing heavy random disk I/O and large memory consumption, and easily leading to OOM. This is a fatal problem: first, it cannot handle large-scale data; second, Spark cannot run on large distributed clusters. A later improvement introduced the Shuffle Consolidate mechanism, which reduces the number of files produced during Shuffle to C*R (where C is the number of cores that can be used concurrently on the Mapper side and R is the total number of parallel tasks on the Reducer side). However, if the number of parallel data partitions (or tasks) on the Reducer side is large, C*R can still be very large, so the problem of too many open files remains.


Before Sort-Based Shuffle was introduced (that is, before Spark 1.1), Spark was best suited to small- and medium-scale big data processing.


To let Spark process larger data sets with higher performance on larger clusters, Sort-Based Shuffle was introduced (starting with Spark 1.1). With it, Spark can handle data of virtually any scale (including the PB level and beyond), and with the introduction and ongoing optimization of Project Tungsten, Spark's ability to process ever larger data sets quickly on ever larger clusters has been pushed to a new peak.


Sort-Based Shuffle does not generate a separate file for each Task on the Reducer side. Instead, each ShuffleMapTask on the Mapper side writes all of its output into a single data file. Because the data produced by each ShuffleMapTask is classified by the Partitioner, Sort-Based Shuffle also uses an index file to record how that ShuffleMapTask's output is laid out inside the data file (data is classified so that it can be shuffled, and after shuffling it is aggregated again). In other words, every ShuffleMapTask on the Mapper side produces two files: the Data file stores the task's Shuffle output, and the Index file records the Partitioner-based layout of the data inside the Data file. Tasks in the next Stage then use this Index file to locate and fetch the data produced for them by the previous Stage's ShuffleMapTasks. A minimal reading sketch follows.
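As a rough illustration of how the Index file can be used (a minimal sketch, assuming the layout used by IndexShuffleBlockResolver: the index file is a sequence of cumulative byte offsets stored as Longs, one entry per partition boundary; the object and method names here are made up for the example):

import java.io.{DataInputStream, FileInputStream}

object IndexFileSketch {
  /** Locate one reduce partition's bytes inside a map task's single .data file,
    * using the cumulative offsets stored in the matching .index file.
    * Returns (startOffset, length). */
  def locatePartition(indexFilePath: String, reduceId: Int): (Long, Long) = {
    val in = new DataInputStream(new FileInputStream(indexFilePath))
    try {
      in.skipBytes(reduceId * 8)  // 8 bytes per Long offset entry
      val start = in.readLong()   // where this partition's data begins
      val end   = in.readLong()   // where the next partition's data begins
      (start, end - start)
    } finally {
      in.close()
    }
  }
}

A reduce task that needs partition i from a given map output would read the two offsets at positions i and i+1 and then fetch exactly that byte range from the Data file.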


The correct answer for the number of temporary files produced by Sort-Based Shuffle is 2M (M is the total number of parallel partitions on the Mapper side, i.e. the total number of Mapper-side Tasks; this is not the same thing as the number of tasks actually running in parallel at any given moment).


Looking back over the history of Shuffle, the number of temporary files produced evolved as follows (a worked comparison follows the list):


Basic Hash Shuffle M*R;


Consolidated Hash Shuffle C*R;


Sort-Based Shuffle 2M;
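To make the difference concrete, here is a back-of-the-envelope comparison using made-up cluster sizes (1,000 map tasks, 1,000 reduce tasks, 100 concurrently usable Mapper-side cores):

// Hypothetical sizes, purely for illustration.
val m = 1000   // total Mapper-side tasks (M)
val r = 1000   // total Reducer-side tasks (R)
val c = 100    // cores usable concurrently on the Mapper side (C)

val basicHash   = m * r   // 1,000,000 files
val consolidate = c * r   //   100,000 files
val sortBased   = 2 * m   //     2,000 files (one .data plus one .index per map task)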




SortShuffle was introduced to solve a problem. What problem? The problem that Spark could not cope with large clusters and large-scale jobs.

ShuffleManager creation in SparkEnv

    val shortShuffleMgrNames = Map(
      "sort" -> classOf[org.apache.spark.shuffle.sort.SortShuffleManager].getName,
      "tungsten-sort" -> classOf[org.apache.spark.shuffle.sort.SortShuffleManager].getName)
    val shuffleMgrName = conf.get("spark.shuffle.manager", "sort")
    val shuffleMgrClass = shortShuffleMgrNames.getOrElse(shuffleMgrName.toLowerCase, shuffleMgrName)
    val shuffleManager = instantiateClass[ShuffleManager](shuffleMgrClass)
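A short usage sketch: because SparkEnv reads spark.shuffle.manager with a default of "sort", the manager can be selected through SparkConf. Both short names above resolve to SortShuffleManager; the getOrElse fallback would also accept a fully qualified class name of a custom ShuffleManager (the class name below is hypothetical, as is the app name):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("shuffle-manager-demo")
  .set("spark.shuffle.manager", "tungsten-sort")  // resolves to SortShuffleManager

// A custom pluggable implementation could be named directly (hypothetical class):
// conf.set("spark.shuffle.manager", "com.example.shuffle.MyShuffleManager")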
The SortShuffleManager source code
In sort-based shuffle, incoming records are sorted according to their target partition IDs and then written to a single map output file. Reducers fetch contiguous regions of this file in order to read their portion of the map output. When the map output data is too large to fit in memory, sorted subsets of the output can be spilled to disk, and those on-disk files are merged to produce the final output file.
Sort-based shuffle has two different write paths for producing the map output file:

I. Serialized sorting: used when all three of the following conditions hold:
1. The shuffle dependency specifies no aggregation or output ordering.
2. The shuffle serializer supports relocation of serialized values (currently supported by KryoSerializer and Spark SQL's custom serializers).
3. The shuffle produces fewer than 16777216 output partitions.

II. Deserialized sorting: used to handle all other cases.
private[spark] class SortShuffleManager(conf: SparkConf) extends ShuffleManager with Logging {

  if (!conf.getBoolean("spark.shuffle.spill", true)) {
    logWarning(
      "spark.shuffle.spill was set to false, but this configuration is ignored as of Spark 1.6+." +
        " Shuffle will continue to spill to disk when necessary.")
  }

  /**
   * A mapping from shuffle ids to the number of mappers producing output for those shuffles.
   */
  private[this] val numMapsForShuffle = new ConcurrentHashMap[Int, Int]()

  override val shuffleBlockResolver = new IndexShuffleBlockResolver(conf)

  /**
   * Register a shuffle with the manager and obtain a handle for it to pass to tasks.
   */
  override def registerShuffle[K, V, C](
      shuffleId: Int,
      numMaps: Int,
      dependency: ShuffleDependency[K, V, C]): ShuffleHandle = {
    if (SortShuffleWriter.shouldBypassMergeSort(SparkEnv.get.conf, dependency)) {
      // If there are fewer than spark.shuffle.sort.bypassMergeThreshold partitions and we don't
      // need map-side aggregation, then write numPartitions files directly and just concatenate
      // them at the end. This avoids doing serialization and deserialization twice to merge
      // together the spilled files, which would happen with the normal code path. The downside is
      // having multiple files open at a time and thus more memory allocated to buffers.
      new BypassMergeSortShuffleHandle[K, V](
        shuffleId, numMaps, dependency.asInstanceOf[ShuffleDependency[K, V, V]])
    } else if (SortShuffleManager.canUseSerializedShuffle(dependency)) {
      // Otherwise, try to buffer map outputs in a serialized form, since this is more efficient:
      new SerializedShuffleHandle[K, V](
        shuffleId, numMaps, dependency.asInstanceOf[ShuffleDependency[K, V, V]])
    } else {
      // Otherwise, buffer map outputs in a deserialized form:
      new BaseShuffleHandle(shuffleId, numMaps, dependency)
    }
  }

  /**
   * Get a reader for a range of reduce partitions (startPartition to endPartition-1, inclusive).
   * Called on executors by reduce tasks.
   */
  override def getReader[K, C](
      handle: ShuffleHandle,
      startPartition: Int,
      endPartition: Int,
      context: TaskContext): ShuffleReader[K, C] = {
    new BlockStoreShuffleReader(
      handle.asInstanceOf[BaseShuffleHandle[K, _, C]], startPartition, endPartition, context)
  }

  /** Get a writer for a given partition. Called on executors by map tasks. */
  override def getWriter[K, V](
      handle: ShuffleHandle,
      mapId: Int,
      context: TaskContext): ShuffleWriter[K, V] = {
    numMapsForShuffle.putIfAbsent(
      handle.shuffleId, handle.asInstanceOf[BaseShuffleHandle[_, _, _]].numMaps)
    val env = SparkEnv.get
    handle match {
      case unsafeShuffleHandle: SerializedShuffleHandle[K @unchecked, V @unchecked] =>
        new UnsafeShuffleWriter(
          env.blockManager,
          shuffleBlockResolver.asInstanceOf[IndexShuffleBlockResolver],
          context.taskMemoryManager(),
          unsafeShuffleHandle,
          mapId,
          context,
          env.conf)
      case bypassMergeSortHandle: BypassMergeSortShuffleHandle[K @unchecked, V @unchecked] =>
        new BypassMergeSortShuffleWriter(
          env.blockManager,
          shuffleBlockResolver.asInstanceOf[IndexShuffleBlockResolver],
          bypassMergeSortHandle,
          mapId,
          context,
          env.conf)
      case other: BaseShuffleHandle[K @unchecked, V @unchecked, _] =>
        new SortShuffleWriter(shuffleBlockResolver, other, mapId, context)
    }
  }

  /** Remove a shuffle's metadata from the ShuffleManager. */
  override def unregisterShuffle(shuffleId: Int): Boolean = {
    Option(numMapsForShuffle.remove(shuffleId)).foreach { numMaps =>
      (0 until numMaps).foreach { mapId =>
        shuffleBlockResolver.removeDataByMap(shuffleId, mapId)
      }
    }
    true
  }

  /** Shut down this ShuffleManager. */
  override def stop(): Unit = {
    shuffleBlockResolver.stop()
  }
}

private[spark] object SortShuffleManager extends Logging {

  /**
   * The maximum number of shuffle output partitions that SortShuffleManager supports when
   * buffering map outputs in a serialized form. This is an extreme defensive programming measure,
   * since it's extremely unlikely that a single shuffle produces over 16 million output partitions.
   */
  val MAX_SHUFFLE_OUTPUT_PARTITIONS_FOR_SERIALIZED_MODE =
    PackedRecordPointer.MAXIMUM_PARTITION_ID + 1

  /**
   * Helper method for determining whether a shuffle should use an optimized serialized shuffle
   * path or whether it should fall back to the original path that operates on deserialized objects.
   */
  def canUseSerializedShuffle(dependency: ShuffleDependency[_, _, _]): Boolean = {
    val shufId = dependency.shuffleId
    val numPartitions = dependency.partitioner.numPartitions
    if (!dependency.serializer.supportsRelocationOfSerializedObjects) {
      log.debug(s"Can't use serialized shuffle for shuffle $shufId because the serializer, " +
        s"${dependency.serializer.getClass.getName}, does not support object relocation")
      false
    } else if (dependency.aggregator.isDefined) {
      log.debug(
        s"Can't use serialized shuffle for shuffle $shufId because an aggregator is defined")
      false
    } else if (numPartitions > MAX_SHUFFLE_OUTPUT_PARTITIONS_FOR_SERIALIZED_MODE) {
      log.debug(s"Can't use serialized shuffle for shuffle $shufId because it has more than " +
        s"$MAX_SHUFFLE_OUTPUT_PARTITIONS_FOR_SERIALIZED_MODE partitions")
      false
    } else {
      log.debug(s"Can use serialized shuffle for shuffle $shufId")
      true
    }
  }
}

/**
 * Subclass of [[BaseShuffleHandle]], used to identify when we've chosen to use the
 * serialized shuffle.
 */
private[spark] class SerializedShuffleHandle[K, V](
  shuffleId: Int,
  numMaps: Int,
  dependency: ShuffleDependency[K, V, V])
  extends BaseShuffleHandle(shuffleId, numMaps, dependency) {
}

/**
 * Subclass of [[BaseShuffleHandle]], used to identify when we've chosen to use the
 * bypass merge sort shuffle path.
 */
private[spark] class BypassMergeSortShuffleHandle[K, V](
  shuffleId: Int,
  numMaps: Int,
  dependency: ShuffleDependency[K, V, V])
  extends BaseShuffleHandle(shuffleId, numMaps, dependency) {
}
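For reference, registerShuffle delegates its first decision to SortShuffleWriter.shouldBypassMergeSort. A paraphrased sketch of that check, based on the Spark 1.6/2.x source (exact wording may differ between versions):

private[spark] object SortShuffleWriter {
  def shouldBypassMergeSort(conf: SparkConf, dep: ShuffleDependency[_, _, _]): Boolean = {
    // Bypass is impossible if map-side aggregation is required.
    if (dep.mapSideCombine) {
      require(dep.aggregator.isDefined, "Map-side combine without Aggregator specified!")
      false
    } else {
      // Otherwise bypass only when the reduce-side partition count is small enough.
      val bypassMergeThreshold = conf.getInt("spark.shuffle.sort.bypassMergeThreshold", 200)
      dep.partitioner.numPartitions <= bypassMergeThreshold
    }
  }
}

In other words, when there is no map-side combine and the number of reduce partitions is at or below the threshold (200 by default), the bypass writer simply writes one file per partition and concatenates them into the single data file at the end.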


