[云星数据 --- Apache Flink in Action Series (Premium Edition)]: Apache Flink Batch API in Depth with Hands-On Programming 022 -- DataSet Practical API in Depth 022

Source: Internet · Editor: 程序博客网 · Date: 2024/06/14 09:50

Flink DataSet Custom API in Depth (Scala Version) - 003

Reduce

Reduce merges the dataset element by element, ultimately producing a single result.

Program:

package code.book.batch.dataset.advance.api

import org.apache.flink.api.common.functions.ReduceFunction
import org.apache.flink.api.scala.{ExecutionEnvironment, _}

object ReduceFunction001scala {
  def main(args: Array[String]): Unit = {
    // 1. Set up the execution environment and create test data
    val env = ExecutionEnvironment.getExecutionEnvironment
    val text = env.fromElements(1, 2, 3, 4, 5, 6, 7)
    // 2. Merge the elements of the DataSet; here we compute the cumulative sum
    val text2 = text.reduce(new ReduceFunction[Int] {
      override def reduce(intermediateResult: Int, next: Int): Int = {
        intermediateResult + next
      }
    })
    text2.print()
    // 3. Merge the elements of the DataSet; here we compute the cumulative product
    val text3 = text.reduce(new ReduceFunction[Int] {
      override def reduce(intermediateResult: Int, next: Int): Int = {
        intermediateResult * next
      }
    })
    text3.print()
    // 4. Merge the elements of the DataSet; the merge logic can be arbitrarily complex
    val text4 = text.reduce(new ReduceFunction[Int] {
      override def reduce(intermediateResult: Int, next: Int): Int = {
        if (intermediateResult % 2 == 0) {
          intermediateResult + next
        } else {
          intermediateResult * next
        }
      }
    })
    text4.print()
    // 5. As the trace shows, intermediateResult is the running merged result so far,
    //    and next is the next element
    val text5 = text.reduce(new ReduceFunction[Int] {
      override def reduce(intermediateResult: Int, next: Int): Int = {
        println("intermediateResult=" + intermediateResult + " ,next=" + next)
        intermediateResult + next
      }
    })
    text5.collect()
  }
}

Output:

text2.print()
28
text3.print()
5040
text4.print()
157
text5.collect()
intermediateResult=1 ,next=2
intermediateResult=3 ,next=3
intermediateResult=6 ,next=4
intermediateResult=10 ,next=5
intermediateResult=15 ,next=6
intermediateResult=21 ,next=7
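The accumulation above can be sketched with plain Scala collections, with no Flink runtime required. This mirrors the pairwise-merge semantics of DataSet.reduce, but not Flink's distributed execution, where partial results may be combined across partitions:

```scala
object ReduceSemanticsSketch {
  def main(args: Array[String]): Unit = {
    val xs = List(1, 2, 3, 4, 5, 6, 7)
    // reduce folds the elements pairwise into one value:
    // (acc, next) plays the same role as (intermediateResult, next) above
    val sum = xs.reduce((acc, next) => acc + next)
    val product = xs.reduce((acc, next) => acc * next)
    println(sum)     // 28
    println(product) // 5040
  }
}
```

Note that Flink's Scala DataSet API also accepts a plain function, so the first example can be written concisely as `text.reduce(_ + _)`.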

reduceGroup

reduceGroup merges the elements of each group separately. It is similar to reduce, except that it produces one result per group. If the dataset is not grouped, the whole dataset is treated as a single group, and, like reduce, only one result is produced.

Program:

package code.book.batch.dataset.advance.api

import java.lang.Iterable

import org.apache.flink.api.common.functions.GroupReduceFunction
import org.apache.flink.api.scala.{ExecutionEnvironment, _}
import org.apache.flink.util.Collector

object GroupReduceFunction001scala {
  def main(args: Array[String]): Unit = {
    // 1. Set up the execution environment and create test data
    val env = ExecutionEnvironment.getExecutionEnvironment
    val text = env.fromElements(1, 2, 3, 4, 5, 6, 7)
    // 2. Merge the elements of the DataSet as one group; here we compute the cumulative sum
    val text2 = text.reduceGroup(new GroupReduceFunction[Int, Int] {
      override def reduce(iterable: Iterable[Int], collector: Collector[Int]): Unit = {
        var sum = 0
        val itor = iterable.iterator()
        while (itor.hasNext) {
          sum += itor.next()
        }
        collector.collect(sum)
      }
    })
    text2.print()
    // 3. Merge again, this time computing the even and odd sums separately
    val text3 = text.reduceGroup(new GroupReduceFunction[Int, (Int, Int)] {
      override def reduce(iterable: Iterable[Int], collector: Collector[(Int, Int)]): Unit = {
        var sum0 = 0
        var sum1 = 0
        val itor = iterable.iterator()
        while (itor.hasNext) {
          val v = itor.next
          if (v % 2 == 0) {
            // sum of the even numbers
            sum0 += v
          } else {
            // sum of the odd numbers
            sum1 += v
          }
        }
        collector.collect(sum0, sum1)
      }
    })
    text3.print()
    // 4. Merge grouped data: total each person's salary (each group yields one result)
    val data = env.fromElements(
      ("zhangsan", 1000), ("lisi", 1001), ("zhangsan", 3000), ("lisi", 1002))
    // 4.1 Group by name
    val data2 = data.groupBy(0).reduceGroup(new GroupReduceFunction[(String, Int), (String, Int)] {
      override def reduce(iterable: Iterable[(String, Int)],
                          collector: Collector[(String, Int)]): Unit = {
        var salary = 0
        var name = ""
        val itor = iterable.iterator()
        // 4.2 Total each person's salary
        while (itor.hasNext) {
          val t = itor.next()
          name = t._1
          salary += t._2
        }
        collector.collect(name, salary)
      }
    })
    data2.print
  }
}

Output:

text2.print()
28
text3.print()
(12,16)
data2.print
(lisi,2003)
(zhangsan,4000)
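The per-group merge in step 4 can likewise be sketched with plain Scala collections, no Flink runtime needed. groupBy forms one group per key and each group collapses to one result, mirroring DataSet.groupBy(0).reduceGroup(...); the object name here is illustrative:

```scala
object GroupReduceSemanticsSketch {
  def main(args: Array[String]): Unit = {
    val data = List(("zhangsan", 1000), ("lisi", 1001), ("zhangsan", 3000), ("lisi", 1002))
    // One group per name; each group is reduced to a single (name, total) pair
    val totals = data.groupBy(_._1).map { case (name, rows) =>
      (name, rows.map(_._2).sum)
    }
    totals.toList.sortBy(_._1).foreach(println)
    // (lisi,2003)
    // (zhangsan,4000)
  }
}
```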