Spark Advanced Sorting Algorithms


Lesson 19: Spark Advanced Sorting Algorithms Thoroughly Demystified

This lesson covers:

1. Basic sorting algorithm

2. Secondary sort algorithm

3. More advanced sorting algorithms

4. Sorting algorithm internals


Preparation:

Start Hadoop: ./start-dfs.sh

Start the history server: ./start-history-server.sh

Start Spark: ./start-all.sh

Start spark-shell

(Implementing an ad-click ranking algorithm (the most basic sort: only a key and a value)):

  // Create the SparkConf object and set the application name and the master
  val conf = new SparkConf().setAppName("The Transformation").setMaster("local")
  // Create the SparkContext object: the only entry point for creating RDDs, the soul of the Driver, and the sole channel to the cluster
  val sc = new SparkContext(conf)
  val lines = sc.textFile("C:\\Users\\css-kxr\\Music\\Big_Data_Software\\spark-1.6.0-bin-hadoop2.6")
  lines.flatMap(line => line.split(" "))
    .map(word => (word, 1))              // pair each word with a count of 1
    .reduceByKey(_ + _)                  // sum the counts per word
    .map(w => (w._2, w._1))              // swap to (count, word) so the count becomes the key
    .sortByKey(false)                    // sort descending by count
    .collect().reverse                   // reverse the collected array
    .map(words => (words._2, words._1))  // swap back to (word, count)
    .foreach(println)
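For reference, here is a minimal self-contained variant of the same pipeline that can be pasted into spark-shell. The sample lines below are made up for illustration and stand in for the text file; the rest of the chain is the same as above.

  // Hypothetical sample data standing in for the text file (made up for illustration)
  val sampleLines = sc.parallelize(Seq("spark hadoop spark", "spark flink", "hadoop spark"))
  sampleLines.flatMap(_.split(" "))
    .map(word => (word, 1))              // pair each word with a count of 1
    .reduceByKey(_ + _)                  // sum the counts per word
    .map(w => (w._2, w._1))              // swap to (count, word) so the count becomes the key
    .sortByKey(false)                    // descending by count
    .collect().reverse                   // reversing the collected array yields ascending order
    .map(words => (words._2, words._1))  // swap back to (word, count)
    .foreach(println)
  // prints: (flink,1) (hadoop,2) (spark,4)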
sortByKey source code:

  /**
   * Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling
   * `collect` or `save` on the resulting RDD will return or output an ordered list of records
   * (in the `save` case, they will be written to multiple `part-X` files in the filesystem, in
   * order of the keys).
   */
  // TODO: this currently doesn't work on P other than Tuple2!
  def sortByKey(ascending: Boolean = true, numPartitions: Int = self.partitions.length)
      : RDD[(K, V)] = self.withScope
  {
    val part = new RangePartitioner(numPartitions, self, ascending)
    new ShuffledRDD[K, V, V](self, part)
      .setKeyOrdering(if (ascending) ordering else ordering.reverse)
  }
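The key point in this source is that sortByKey builds a RangePartitioner, so records are first range-partitioned by key and each partition then holds a contiguous, sorted key range; collecting or saving the resulting RDD therefore yields a globally ordered output. A small sketch of calling it with an explicit partition count (the data is made up for illustration):

  // Hypothetical example: sort (key, value) pairs descending across 2 partitions
  val pairs = sc.parallelize(Seq((3, "c"), (1, "a"), (2, "b"), (5, "e"), (4, "d")))
  val sortedDesc = pairs.sortByKey(ascending = false, numPartitions = 2)
  // Each of the 2 partitions holds a contiguous key range, so saving the RDD
  // would produce part-files that are ordered end to end.
  sortedDesc.collect().foreach(println)   // (5,e), (4,d), (3,c), (2,b), (1,a)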

Custom sort key for the secondary sort (step 1):
/**
  * Created by css-kxr on 2016/1/24.
  * Custom key for the secondary sort.
  */
class SecondarySortKey(val first: Int, val second: Int) extends Ordered[SecondarySortKey] with Serializable {
  // Compare by the first field; break ties with the second field
  def compare(other: SecondarySortKey): Int = {
    if (this.first - other.first != 0)
      this.first - other.first
    else this.second - other.second
  }
}
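A quick sanity check of the key's ordering (the values below are made up for illustration): compare orders by first, and falls back to second only when the first fields are equal.

  // Hypothetical check of SecondarySortKey ordering (illustration only)
  val a = new SecondarySortKey(2, 3)
  val b = new SecondarySortKey(2, 1)
  val c = new SecondarySortKey(1, 9)
  println(a.compare(b) > 0)   // true: same first field (2), and 3 > 1
  println(a.compare(c) > 0)   // true: first field 2 > 1, the second field is ignored
  println(List(a, b, c).sorted.map(k => (k.first, k.second)))
  // List((1,9), (2,1), (2,3)), i.e. sorted by first, then by second

Note that a subtraction-based compare can overflow when the fields are near Int.MaxValue or Int.MinValue; it is fine for small values like the ones in this lesson.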
Secondary sort algorithm:
import org.apache.spark.{SparkContext, SparkConf}

/**
  * Created by css-kxr on 2016/1/24.
  * Secondary sort, implemented in four steps:
  * Step 1: implement a custom sort key with Ordered and Serializable
  * Step 2: load the file to be sorted and turn it into an RDD of <Key, Value> pairs
  * Step 3: use sortByKey on the custom key to perform the secondary sort
  * Step 4: strip off the sort key and keep only the sorted values
  */
object SecondarySortAPP {
  def main(args: Array[String]) {
    // Create the SparkConf object and initialize the application's run configuration
    val conf = new SparkConf().setAppName("The Transformation").setMaster("local")
    // Create the SparkContext object: the only entry point for creating RDDs, the soul of the Driver, and the sole channel to the cluster
    val sc = new SparkContext(conf)
    val lines = sc.textFile("C:\\Users\\css-kxr\\Music\\Big_Data_Software\\spark-1.6.0-bin-hadoop2.6")
    // Step 2: pair each line with a SecondarySortKey built from its first two fields
    val withSortApp = lines.map(line =>
      (new SecondarySortKey(line.split(" ")(0).toInt, line.split(" ")(1).toInt), line))
    // Step 3: sort by the custom key, descending
    val sorted = withSortApp.sortByKey(false)
    // Step 4: drop the key and keep only the original lines
    val sortResult = sorted.map(sortedLine => sortedLine._2)
    sortResult.collect.foreach(println)
  }
}
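For a concrete sense of the result, here is a small example with made-up input standing in for the text file: each line holds two space-separated integers, and sortByKey(false) with the key above orders the lines descending by the first column, breaking ties with the second column.

  // Hypothetical input standing in for the text file (illustration only)
  val data = sc.parallelize(Seq("2 3", "4 1", "3 2", "4 3", "1 7"))
  data.map(line => (new SecondarySortKey(line.split(" ")(0).toInt, line.split(" ")(1).toInt), line))
    .sortByKey(false)   // descending: largest first column first, ties broken by the second column
    .map(_._2)          // drop the key, keep the original line
    .collect().foreach(println)
  // prints: 4 3, 4 1, 3 2, 2 3, 1 7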



