Spark Operators [12]: groupByKey, cogroup, join, lookup Explained with Source Code and Examples


groupByKey

Source code

/**
 * Group the values for each key in the RDD into a single sequence.
 * The ordering of elements within each group is not guaranteed.
 *
 * @note groupByKey is expensive. If you are grouping in order to perform an
 * aggregation over each key's values (such as a sum or average), using
 * `aggregateByKey` or `reduceByKey` instead will give much better performance.
 *
 * @note As currently implemented, groupByKey must be able to hold all the
 * (K, V) pairs for a key in memory. If a key has too many values, this can
 * result in an OutOfMemoryError.
 */
def groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])] = self.withScope {
  // groupByKey should not use map-side combine (mapSideCombine = false),
  // because map-side combine does not reduce the amount of data shuffled,
  // and it requires inserting all map-side data into a hash table,
  // which leads to more objects in the old generation.
  val createCombiner = (v: V) => CompactBuffer(v)
  val mergeValue = (buf: CompactBuffer[V], v: V) => buf += v
  val mergeCombiners = (c1: CompactBuffer[V], c2: CompactBuffer[V]) => c1 ++= c2
  val bufs = combineByKeyWithClassTag[CompactBuffer[V]](
    createCombiner, mergeValue, mergeCombiners, partitioner, mapSideCombine = false)
  bufs.asInstanceOf[RDD[(K, Iterable[V])]]
}

def groupByKey(): RDD[(K, Iterable[V])]
def groupByKey(numPartitions: Int): RDD[(K, Iterable[V])]
def groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])]
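As the note in the source above suggests, per-key aggregations are usually better expressed with reduceByKey (or aggregateByKey), which does use map-side combine. A minimal sketch for the Spark shell, assuming an existing SparkContext sc; the variable names are illustrative only:

// Per-key sum: groupByKey shuffles every value for a key before summing,
// while reduceByKey produces partial sums on the map side first.
val scores = sc.parallelize(Seq(("class1", 90), ("class2", 60), ("class1", 60), ("class2", 50)))

val sumWithGroup  = scores.groupByKey().mapValues(_.sum)   // all values are shuffled
val sumWithReduce = scores.reduceByKey(_ + _)              // partial sums before the shuffle

sumWithReduce.collect()   // e.g. Array((class1,150), (class2,110))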


Java example

public static void groupByKey() {
    SparkConf conf = new SparkConf().setAppName("groupByKey").setMaster("local");
    JavaSparkContext sc = new JavaSparkContext(conf);
    List<Tuple2<String, Integer>> scoreList = Arrays.asList(
            new Tuple2<String, Integer>("class1", 90),
            new Tuple2<String, Integer>("class2", 60),
            new Tuple2<String, Integer>("class1", 60),
            new Tuple2<String, Integer>("class2", 50)
    );
    // JavaPairRDD<String, Integer> pairRDD = sc.parallelizePairs(scoreList);
    JavaRDD<Tuple2<String, Integer>> rdd = sc.parallelize(scoreList);
    JavaPairRDD<String, Integer> pairRDD = JavaPairRDD.fromJavaRDD(rdd);
    JavaPairRDD<String, Iterable<Integer>> resRdd = pairRDD.groupByKey();
    // collectAsMap() should only be used when the expected result is small,
    // because all of the data is pulled back into the driver's memory.
    Map<String, Iterable<Integer>> map = resRdd.collectAsMap();
    for (String k : map.keySet()) {
        System.out.println("key:" + k);
        Iterator<Integer> iterator = map.get(k).iterator();
        while (iterator.hasNext()) {
            System.out.println(iterator.next());
        }
    }
    sc.close();
}
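For comparison, a rough Scala sketch of the same computation (not from the original post); it assumes a Spark shell with sc available:

val scoreList = Seq(("class1", 90), ("class2", 60), ("class1", 60), ("class2", 50))
val pairRDD = sc.parallelize(scoreList)

// collectAsMap() pulls the whole result back into the driver's memory,
// so it is only appropriate when the grouped result is known to be small.
val grouped = pairRDD.groupByKey().collectAsMap()
grouped.foreach { case (k, vs) => println(s"key:$k -> ${vs.mkString(",")}") }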

cogroup

def cogroup[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (Iterable[V], Iterable[W]))]
def cogroup[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)], numPartitions: Int): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]
def cogroup[W1, W2, W3](other1: RDD[(K, W1)], other2: RDD[(K, W2)], other3: RDD[(K, W3)], numPartitions: Int): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3]))]

def cogroup[W](other: RDD[(K, W)]): RDD[(K, (Iterable[V], Iterable[W]))]
def cogroup[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)]): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]
def cogroup[W1, W2, W3](other1: RDD[(K, W1)], other2: RDD[(K, W2)], other3: RDD[(K, W3)]): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3]))]

def cogroup[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (Iterable[V], Iterable[W]))]
def cogroup[W1, W2](other1: RDD[(K, W1)], other2: RDD[(K, W2)], partitioner: Partitioner): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2]))]
def cogroup[W1, W2, W3](other1: RDD[(K, W1)], other2: RDD[(K, W2)], other3: RDD[(K, W3)], partitioner: Partitioner): RDD[(K, (Iterable[V], Iterable[W1], Iterable[W2], Iterable[W3]))]

cogroup is analogous to a full outer join in SQL: it returns records from both RDDs, and where a key has no match the corresponding Iterable is empty.
The numPartitions parameter specifies the number of partitions in the result.
The partitioner parameter specifies the partitioning function.


Example with one other RDD as the parameter

val rdd1 = sc.parallelize(Array(("A","1"),("B","2"),("C","3")),2)
val rdd2 = sc.parallelize(Array(("A","a"),("C","c"),("D","d")),2)
val res = rdd1.cogroup(rdd2)
res.foreach(println)

Result:
(B,(CompactBuffer(2),CompactBuffer()))
(D,(CompactBuffer(),CompactBuffer(d)))
(A,(CompactBuffer(1),CompactBuffer(a)))
(C,(CompactBuffer(3),CompactBuffer(c)))
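The full-outer-join analogy can also be made literal: Spark provides fullOuterJoin, which is implemented on top of cogroup and wraps the missing side in None. A minimal sketch reusing rdd1 and rdd2 from this example:

// fullOuterJoin flattens cogroup's Iterables into Option-wrapped pairs.
rdd1.fullOuterJoin(rdd2).collect().foreach(println)
// e.g. (B,(Some(2),None)), (D,(None,Some(d))), (A,(Some(1),Some(a))), (C,(Some(3),Some(c)))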


Example with two other RDDs as parameters

var rdd1 = sc.makeRDD(Array(("A","1"),("B","2"),("C","3")),2)
var rdd2 = sc.makeRDD(Array(("A","a"),("C","c"),("D","d")),2)
var rdd3 = sc.makeRDD(Array(("A","A"),("E","E")),2)

scala> var rdd4 = rdd1.cogroup(rdd2,rdd3)
rdd4: org.apache.spark.rdd.RDD[(String, (Iterable[String], Iterable[String], Iterable[String]))] = MapPartitionsRDD[17] at cogroup at <console>:27

scala> rdd4.partitions.size
res7: Int = 2

scala> rdd4.collect
res9: Array[(String, (Iterable[String], Iterable[String], Iterable[String]))] = Array((B,(CompactBuffer(2),CompactBuffer(),CompactBuffer())), (D,(CompactBuffer(),CompactBuffer(d),CompactBuffer())), (A,(CompactBuffer(1),CompactBuffer(a),CompactBuffer(A))), (C,(CompactBuffer(3),CompactBuffer(c),CompactBuffer())), (E,(CompactBuffer(),CompactBuffer(),CompactBuffer(E))))

The example with three other RDDs follows the same pattern; a sketch is shown below.
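A minimal sketch of that three-RDD form, reusing rdd1, rdd2, and rdd3 from the previous example and adding a hypothetical rddX:

// cogroup with three other RDDs: every key maps to a tuple of four Iterables,
// one per participating RDD (empty where the key is absent).
val rddX = sc.makeRDD(Array(("A", "x"), ("F", "f")), 2)   // hypothetical extra RDD
rdd1.cogroup(rdd2, rdd3, rddX).collect().foreach(println)
// e.g. (A,(CompactBuffer(1),CompactBuffer(a),CompactBuffer(A),CompactBuffer(x)))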


join

def join[W](other: RDD[(K, W)]): RDD[(K, (V, W))]
def join[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (V, W))]
def join[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, W))]

join is analogous to an inner join in SQL: it returns only the records whose key K can be matched in both RDDs. join only works on two RDDs at a time; to join more RDDs, simply chain additional join calls (see the sketch after the example below).

The numPartitions parameter specifies the number of partitions in the result.
The partitioner parameter specifies the partitioning function.

var rdd1 = sc.makeRDD(Array(("A","1"),("B","2"),("C","3")),2)
var rdd2 = sc.makeRDD(Array(("A","a"),("C","c"),("D","d")),2)

scala> rdd1.join(rdd2).collect
res10: Array[(String, (String, String))] = Array((A,(1,a)), (C,(3,c)))
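As noted above, joining more than two RDDs is just a matter of chaining join calls. A rough sketch reusing rdd1 and rdd2 from this example, with a hypothetical rdd3:

// Chain two joins: only keys present in all three RDDs survive (here just "A").
val rdd3 = sc.makeRDD(Array(("A", "A"), ("E", "E")), 2)
val chained = rdd1.join(rdd2).join(rdd3)                    // RDD[(K, ((V1, V2), V3))]
chained.mapValues { case ((v1, v2), v3) => (v1, v2, v3) }.collect()
// e.g. Array((A,(1,a,A)))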

lookup

def lookup(key: K): Seq[V]

lookup is used on RDDs of (K, V) pairs: given a key K, it returns all values V in the RDD associated with that key, as a WrappedArray.

scala> var rdd1 = sc.makeRDD(Array(("A",0),("A",2),("B",1),("B",2),("C",1)))
rdd1: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[0] at makeRDD at <console>:21

scala> rdd1.lookup("A")
res0: Seq[Int] = WrappedArray(0, 2)

scala> rdd1.lookup("B")
res1: Seq[Int] = WrappedArray(1, 2)
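One performance detail worth noting (based on the behavior described for PairRDDFunctions.lookup): if the RDD has a known Partitioner, lookup only searches the single partition that the key maps to; otherwise it scans every partition. A brief sketch:

import org.apache.spark.HashPartitioner

// After partitionBy, the RDD has a known partitioner, so lookup("A")
// only searches the partition that "A" hashes to.
val partitioned = rdd1.partitionBy(new HashPartitioner(4)).cache()
partitioned.lookup("A")   // Seq(0, 2)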

Note: the cogroup, join, and lookup examples above are adapted from: http://lxw1234.com/archives/2015/07/384.htm
