Spark Operators: RDD Key-Value Transformations (1) – partitionBy, mapValues, flatMapValues


Keywords: Spark operators, Spark RDD key-value transformations, partitionBy, mapValues, flatMapValues

partitionBy

def partitionBy(partitioner: Partitioner): RDD[(K, V)]

This function repartitions the original RDD according to the given partitioner, producing a new ShuffledRDD.

    scala> var rdd1 = sc.makeRDD(Array((1,"A"),(2,"B"),(3,"C"),(4,"D")),2)
    rdd1: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[23] at makeRDD at <console>:21

    scala> rdd1.partitions.size
    res20: Int = 2

    // Inspect the elements of each partition in rdd1
    scala> rdd1.mapPartitionsWithIndex{
         |   (partIdx, iter) => {
         |     var part_map = scala.collection.mutable.Map[String, List[(Int, String)]]()
         |     while (iter.hasNext) {
         |       var part_name = "part_" + partIdx
         |       var elem = iter.next()
         |       if (part_map.contains(part_name)) {
         |         var elems = part_map(part_name)
         |         elems ::= elem
         |         part_map(part_name) = elems
         |       } else {
         |         part_map(part_name) = List[(Int, String)](elem)
         |       }
         |     }
         |     part_map.iterator
         |   }
         | }.collect
    res22: Array[(String, List[(Int, String)])] = Array((part_0,List((2,B), (1,A))), (part_1,List((4,D), (3,C))))
    // (2,B) and (1,A) are in part_0; (4,D) and (3,C) are in part_1

    // Repartition with partitionBy
    scala> var rdd2 = rdd1.partitionBy(new org.apache.spark.HashPartitioner(2))
    rdd2: org.apache.spark.rdd.RDD[(Int, String)] = ShuffledRDD[25] at partitionBy at <console>:23

    scala> rdd2.partitions.size
    res23: Int = 2

    // Inspect the elements of each partition in rdd2
    scala> rdd2.mapPartitionsWithIndex{
         |   (partIdx, iter) => {
         |     var part_map = scala.collection.mutable.Map[String, List[(Int, String)]]()
         |     while (iter.hasNext) {
         |       var part_name = "part_" + partIdx
         |       var elem = iter.next()
         |       if (part_map.contains(part_name)) {
         |         var elems = part_map(part_name)
         |         elems ::= elem
         |         part_map(part_name) = elems
         |       } else {
         |         part_map(part_name) = List[(Int, String)](elem)
         |       }
         |     }
         |     part_map.iterator
         |   }
         | }.collect
    res24: Array[(String, List[(Int, String)])] = Array((part_0,List((4,D), (2,B))), (part_1,List((3,C), (1,A))))
    // (4,D) and (2,B) are in part_0; (3,C) and (1,A) are in part_1
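partitionBy accepts any org.apache.spark.Partitioner, not just HashPartitioner. As a minimal sketch (the EvenOddPartitioner class below is illustrative, not part of the original example), a custom partitioner only needs to define numPartitions and getPartition:

    import org.apache.spark.Partitioner

    // Illustrative custom partitioner: even Int keys go to partition 0, odd keys to partition 1
    class EvenOddPartitioner extends Partitioner {
      override def numPartitions: Int = 2
      override def getPartition(key: Any): Int = key match {
        case k: Int => if (k % 2 == 0) 0 else 1
        case _      => 0
      }
    }

    // Usage: rdd1.partitionBy(new EvenOddPartitioner).glom().collect()
    // puts (2,"B") and (4,"D") into one partition, (1,"A") and (3,"C") into the other

Implementing equals and hashCode as well lets Spark recognize when two RDDs are already partitioned the same way and skip an unnecessary shuffle.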

mapValues

def mapValues[U](f: (V) => U): RDD[(K, U)]

Similar to map in the basic transformations, except that mapValues applies the function only to the V values in [K, V] pairs, leaving the keys unchanged.

    scala> var rdd1 = sc.makeRDD(Array((1,"A"),(2,"B"),(3,"C"),(4,"D")),2)
    rdd1: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[27] at makeRDD at <console>:21

    scala> rdd1.mapValues(x => x + "_").collect
    res26: Array[(Int, String)] = Array((1,A_), (2,B_), (3,C_), (4,D_))
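Because mapValues never touches the keys, the result keeps the parent RDD's partitioner, whereas a plain map over the pairs drops it. A minimal sketch of the difference, reusing the HashPartitioner example from the partitionBy section above:

    // mapValues retains the parent's partitioner; map over the same pairs discards it
    val partitioned = rdd1.partitionBy(new org.apache.spark.HashPartitioner(2))
    partitioned.mapValues(_ + "_").partitioner                   // Some(...) – the HashPartitioner is kept
    partitioned.map { case (k, v) => (k, v + "_") }.partitioner  // None – the partitioner is lost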

flatMapValues

def flatMapValues[U](f: (V) => TraversableOnce[U]): RDD[(K, U)]

Similar to flatMap in the basic transformations, except that flatMapValues applies the function only to the V values in [K, V] pairs and pairs every element of the result with the original key. In the example below, x + "_" produces a String, which is flattened into its characters, so the result type is (Int, Char).

    scala> rdd1.flatMapValues(x => x + "_").collect
    res36: Array[(Int, Char)] = Array((1,A), (1,_), (2,B), (2,_), (3,C), (3,_), (4,D), (4,_))
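Any function whose result is a collection (or anything convertible to TraversableOnce) works the same way: every element of the returned collection is re-paired with the original key. A small illustrative sketch (the sample data below is made up for this example):

    // Each value expands into several elements; each element is paired with its key
    val pairs = sc.makeRDD(Array(("a", "1,2"), ("b", "3")))
    pairs.flatMapValues(_.split(",")).collect
    // expected: Array((a,1), (a,2), (b,3))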

For more on Spark operators, see the Spark算子 (Spark operators) tag:

http://lxw1234.com/archives/tag/spark%E7%AE%97%E5%AD%90
