Spark secondary sort


Raw data:

[root@iteblog.com /tmp]# vim data.txt
2015,1,24
2015,3,56
2015,1,3
2015,2,-43
2015,4,5
2015,3,46
2014,2,64
2015,1,4
2015,1,21
2015,2,35
2015,2,0

Expected output:

2014-2	64
2015-1	3,4,21,24
2015-2	-43,0,35
2015-3	46,56
2015-4	5

Code implementation:

scala> val file = sc.textFile("/tmp/data.txt")
file: org.apache.spark.rdd.RDD[String] = /tmp/data.txt MapPartitionsRDD[1] at textFile at <console>:27

scala> val data = file.map(_.split(",")).map(item => (s"${item(0)}-${item(1)}", item(2)))
data: org.apache.spark.rdd.RDD[(String, String)] = MapPartitionsRDD[3] at map at <console>:29

scala> data.collect().foreach(println)
(2015-1,24)
(2015-3,56)
(2015-1,3)
(2015-2,-43)
(2015-4,5)
(2015-3,46)
(2014-2,64)
(2015-1,4)
(2015-1,21)
(2015-2,35)
(2015-2,0)

scala> val rdd = data.groupByKey
rdd: org.apache.spark.rdd.RDD[(String, Iterable[String])] = ShuffledRDD[5] at groupByKey at <console>:31

scala> rdd.collect().foreach(println)
(2014-2,CompactBuffer(64))
(2015-1,CompactBuffer(24, 3, 4, 21))
(2015-2,CompactBuffer(35, 0, -43))
(2015-3,CompactBuffer(56, 46))
(2015-4,CompactBuffer(5))

scala> val result = rdd.map(item => (item._1, item._2.toList.sortWith(_.toInt < _.toInt)))
result: org.apache.spark.rdd.RDD[(String, List[String])] = MapPartitionsRDD[20] at map at <console>:33

scala> result.collect.foreach(item => println(s"${item._1}\t${item._2.mkString(",")}"))
2014-2	64
2015-1	3,4,21,24
2015-2	-43,0,35
2015-3	46,56
2015-4	5
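For reference, the same pipeline can also be packaged as a standalone Spark application instead of being typed into the shell. This is only a minimal sketch: the wrapper object name SecondarySort and the SparkConf setup are assumptions, while the transformations and the /tmp/data.txt input path mirror the session above.

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical wrapper object; the pipeline body mirrors the shell session above.
object SecondarySort {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SecondarySort")
    val sc = new SparkContext(conf)

    val result = sc.textFile("/tmp/data.txt")
      .map(_.split(","))
      .map(item => (s"${item(0)}-${item(1)}", item(2)))  // key: "year-month", value: the number as a String
      .groupByKey()
      .mapValues(_.toList.sortWith(_.toInt < _.toInt))   // sort each group's values numerically

    result.collect().foreach { case (key, values) =>
      println(s"$key\t${values.mkString(",")}")
    }
    sc.stop()
  }
}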

As you can see, solving this problem with Spark is very straightforward. The CompactBuffer shown above implements Scala's Seq, so it can be converted directly to a List or an Array and then sorted with Scala's ordinary sort methods.
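As a plain-Scala illustration of that point (no Spark needed), the snippet below uses an ordinary Seq as a stand-in for the 2015-1 group's CompactBuffer; sortBy(_.toInt) is equivalent to the sortWith(_.toInt < _.toInt) used in the session above.

// The grouped values behave like a regular Scala Seq, so toList plus a numeric sort is enough.
val values: Seq[String] = Seq("24", "3", "4", "21")  // stand-in for CompactBuffer(24, 3, 4, 21)
val sorted = values.toList.sortBy(_.toInt)           // List("3", "4", "21", "24")
println(sorted.mkString(","))                        // prints: 3,4,21,24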
