Spark Function Explained: collectAsMap

Similar in purpose to collect, this function is used on pair RDDs and returns the result to the driver as a Map. The official documentation states:

Return the key-value pairs in this RDD to the master as a Map.
Warning: this doesn't return a multimap (so if you have multiple values to the same key, only one value per key is preserved in the map returned)

Function signature

def collectAsMap(): Map[K, V]

Example

scala> val data = sc.parallelize(List((1, "www"), (1, "iteblog"), (1, "com"), (2, "bbs"), (2, "iteblog"), (2, "com"), (3, "good")))
data: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[26] at parallelize at <console>:12

scala> data.collectAsMap
res28: scala.collection.Map[Int,String] = Map(2 -> com, 1 -> com, 3 -> good)
As the result shows, when the same key appears with multiple values in the RDD, later values overwrite earlier ones, so every key in the returned Map is unique and maps to exactly one value. If you need to keep all values for each key, see the sketch below.
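One common workaround (not part of the original post) is to group the values by key before collecting, so nothing is dropped. A minimal sketch in the same REPL style, reusing the data RDD defined above; the ordering of keys and values in the printed Map may differ from run to run:

scala> // group all values per key, then collect the grouped result as a Map
scala> val grouped = data.groupByKey().mapValues(_.toList).collectAsMap
grouped: scala.collection.Map[Int,List[String]] = Map(2 -> List(bbs, iteblog, com), 1 -> List(www, iteblog, com), 3 -> List(good))

Note that collectAsMap (like collect) brings the entire result to the driver, so both variants should only be used on data small enough to fit in driver memory.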
